00:00:00.001 Started by upstream project "autotest-nightly" build number 3888 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3268 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.018 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.036 Fetching changes from the remote Git repository 00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.052 Using shallow fetch with depth 1 00:00:00.052 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.052 > git --version # timeout=10 00:00:00.073 > git --version # 'git version 2.39.2' 00:00:00.073 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.101 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.101 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.764 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.774 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.785 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.785 > git config core.sparsecheckout # timeout=10 00:00:03.796 > git read-tree -mu HEAD # timeout=10 00:00:03.812 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.831 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.831 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.931 [Pipeline] Start of Pipeline 00:00:03.947 [Pipeline] library 00:00:03.948 Loading library shm_lib@master 00:00:03.949 Library shm_lib@master is cached. Copying from home. 00:00:03.969 [Pipeline] node 00:00:03.978 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.979 [Pipeline] { 00:00:03.991 [Pipeline] catchError 00:00:03.993 [Pipeline] { 00:00:04.007 [Pipeline] wrap 00:00:04.015 [Pipeline] { 00:00:04.021 [Pipeline] stage 00:00:04.022 [Pipeline] { (Prologue) 00:00:04.189 [Pipeline] sh 00:00:04.475 + logger -p user.info -t JENKINS-CI 00:00:04.490 [Pipeline] echo 00:00:04.491 Node: GP11 00:00:04.500 [Pipeline] sh 00:00:04.800 [Pipeline] setCustomBuildProperty 00:00:04.813 [Pipeline] echo 00:00:04.815 Cleanup processes 00:00:04.820 [Pipeline] sh 00:00:05.102 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.102 1656647 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.116 [Pipeline] sh 00:00:05.398 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.398 ++ grep -v 'sudo pgrep' 00:00:05.398 ++ awk '{print $1}' 00:00:05.398 + sudo kill -9 00:00:05.398 + true 00:00:05.415 [Pipeline] cleanWs 00:00:05.425 [WS-CLEANUP] Deleting project workspace... 00:00:05.425 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.430 [WS-CLEANUP] done 00:00:05.434 [Pipeline] setCustomBuildProperty 00:00:05.447 [Pipeline] sh 00:00:05.728 + sudo git config --global --replace-all safe.directory '*' 00:00:05.821 [Pipeline] httpRequest 00:00:05.847 [Pipeline] echo 00:00:05.848 Sorcerer 10.211.164.101 is alive 00:00:05.856 [Pipeline] httpRequest 00:00:05.860 HttpMethod: GET 00:00:05.860 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.861 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.865 Response Code: HTTP/1.1 200 OK 00:00:05.865 Success: Status code 200 is in the accepted range: 200,404 00:00:05.865 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.880 [Pipeline] sh 00:00:07.156 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.169 [Pipeline] httpRequest 00:00:07.179 [Pipeline] echo 00:00:07.180 Sorcerer 10.211.164.101 is alive 00:00:07.185 [Pipeline] httpRequest 00:00:07.188 HttpMethod: GET 00:00:07.189 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.189 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.205 Response Code: HTTP/1.1 200 OK 00:00:07.206 Success: Status code 200 is in the accepted range: 200,404 00:00:07.206 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:21.276 [Pipeline] sh 00:01:21.557 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:24.098 [Pipeline] sh 00:01:24.370 + git -C spdk log --oneline -n5 00:01:24.370 719d03c6a sock/uring: only register net impl if supported 00:01:24.370 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:24.370 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:24.370 6c7c1f57e accel: add sequence outstanding stat 00:01:24.370 3bc8e6a26 accel: add utility to put task 00:01:24.381 [Pipeline] } 00:01:24.397 [Pipeline] // stage 00:01:24.403 [Pipeline] stage 00:01:24.404 [Pipeline] { (Prepare) 00:01:24.414 [Pipeline] writeFile 00:01:24.425 [Pipeline] sh 00:01:24.700 + logger -p user.info -t JENKINS-CI 00:01:24.713 [Pipeline] sh 00:01:25.014 + logger -p user.info -t JENKINS-CI 00:01:25.030 [Pipeline] sh 00:01:25.310 + cat autorun-spdk.conf 00:01:25.310 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.310 SPDK_TEST_NVMF=1 00:01:25.310 SPDK_TEST_NVME_CLI=1 00:01:25.310 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.310 SPDK_TEST_NVMF_NICS=e810 00:01:25.310 SPDK_RUN_ASAN=1 00:01:25.310 SPDK_RUN_UBSAN=1 00:01:25.310 NET_TYPE=phy 00:01:25.316 RUN_NIGHTLY=1 00:01:25.322 [Pipeline] readFile 00:01:25.357 [Pipeline] withEnv 00:01:25.359 [Pipeline] { 00:01:25.375 [Pipeline] sh 00:01:25.656 + set -ex 00:01:25.656 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:25.656 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.656 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.656 ++ SPDK_TEST_NVMF=1 00:01:25.656 ++ SPDK_TEST_NVME_CLI=1 00:01:25.656 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.656 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.656 ++ SPDK_RUN_ASAN=1 00:01:25.656 ++ SPDK_RUN_UBSAN=1 00:01:25.656 ++ NET_TYPE=phy 00:01:25.656 ++ RUN_NIGHTLY=1 00:01:25.656 + case $SPDK_TEST_NVMF_NICS in 00:01:25.656 + DRIVERS=ice 00:01:25.656 + [[ tcp == \r\d\m\a ]] 00:01:25.656 + 
[[ -n ice ]] 00:01:25.656 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:25.656 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:25.656 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:25.656 rmmod: ERROR: Module irdma is not currently loaded 00:01:25.656 rmmod: ERROR: Module i40iw is not currently loaded 00:01:25.656 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:25.656 + true 00:01:25.656 + for D in $DRIVERS 00:01:25.656 + sudo modprobe ice 00:01:25.656 + exit 0 00:01:25.664 [Pipeline] } 00:01:25.682 [Pipeline] // withEnv 00:01:25.688 [Pipeline] } 00:01:25.704 [Pipeline] // stage 00:01:25.714 [Pipeline] catchError 00:01:25.715 [Pipeline] { 00:01:25.730 [Pipeline] timeout 00:01:25.730 Timeout set to expire in 50 min 00:01:25.732 [Pipeline] { 00:01:25.747 [Pipeline] stage 00:01:25.749 [Pipeline] { (Tests) 00:01:25.764 [Pipeline] sh 00:01:26.044 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.044 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.044 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.044 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:26.044 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.044 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.044 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:26.044 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.044 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.044 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.044 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:26.044 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.044 + source /etc/os-release 00:01:26.044 ++ NAME='Fedora Linux' 00:01:26.044 ++ VERSION='38 (Cloud Edition)' 00:01:26.044 ++ ID=fedora 00:01:26.044 ++ VERSION_ID=38 00:01:26.044 ++ VERSION_CODENAME= 00:01:26.044 ++ PLATFORM_ID=platform:f38 00:01:26.044 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.044 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.044 ++ LOGO=fedora-logo-icon 00:01:26.044 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.044 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.044 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.044 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.044 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.044 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.044 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.044 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.044 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.044 ++ SUPPORT_END=2024-05-14 00:01:26.044 ++ VARIANT='Cloud Edition' 00:01:26.044 ++ VARIANT_ID=cloud 00:01:26.044 + uname -a 00:01:26.044 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.044 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:26.978 Hugepages 00:01:26.978 node hugesize free / total 00:01:26.978 node0 1048576kB 0 / 0 00:01:26.978 node0 2048kB 0 / 0 00:01:26.978 node1 1048576kB 0 / 0 00:01:26.978 node1 2048kB 0 / 0 00:01:26.978 00:01:26.978 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.978 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.3 8086 0e23 0 
ioatdma - - 00:01:26.978 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:26.978 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:26.978 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:27.236 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:27.236 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:27.236 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:27.236 + rm -f /tmp/spdk-ld-path 00:01:27.236 + source autorun-spdk.conf 00:01:27.236 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.236 ++ SPDK_TEST_NVMF=1 00:01:27.236 ++ SPDK_TEST_NVME_CLI=1 00:01:27.236 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.236 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.236 ++ SPDK_RUN_ASAN=1 00:01:27.236 ++ SPDK_RUN_UBSAN=1 00:01:27.236 ++ NET_TYPE=phy 00:01:27.236 ++ RUN_NIGHTLY=1 00:01:27.236 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.236 + [[ -n '' ]] 00:01:27.236 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.236 + for M in /var/spdk/build-*-manifest.txt 00:01:27.236 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.236 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.236 + for M in /var/spdk/build-*-manifest.txt 00:01:27.236 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.236 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.236 ++ uname 00:01:27.236 + [[ Linux == \L\i\n\u\x ]] 00:01:27.236 + sudo dmesg -T 00:01:27.236 + sudo dmesg --clear 00:01:27.236 + dmesg_pid=1657956 00:01:27.236 + [[ Fedora Linux == FreeBSD ]] 00:01:27.236 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.236 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.236 + sudo dmesg -Tw 00:01:27.236 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.236 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.236 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.236 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.236 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.236 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:27.236 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.236 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.236 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.236 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.236 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.236 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.236 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.236 Test configuration: 00:01:27.236 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.236 SPDK_TEST_NVMF=1 00:01:27.236 SPDK_TEST_NVME_CLI=1 00:01:27.236 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.236 SPDK_TEST_NVMF_NICS=e810 00:01:27.236 SPDK_RUN_ASAN=1 00:01:27.236 SPDK_RUN_UBSAN=1 00:01:27.236 NET_TYPE=phy 00:01:27.236 RUN_NIGHTLY=1 14:34:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:27.236 14:34:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.236 14:34:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.236 14:34:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.237 14:34:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.237 14:34:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.237 14:34:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.237 14:34:06 -- paths/export.sh@5 -- $ export PATH 00:01:27.237 14:34:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.237 14:34:06 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:27.237 14:34:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:27.237 14:34:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720960446.XXXXXX 00:01:27.237 14:34:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720960446.SsBBqW 00:01:27.237 14:34:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:27.237 14:34:06 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:27.237 14:34:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:27.237 14:34:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:27.237 14:34:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.237 14:34:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:27.237 14:34:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:27.237 14:34:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.237 14:34:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:27.237 14:34:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:27.237 14:34:06 -- pm/common@17 -- $ local monitor 00:01:27.237 14:34:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.237 14:34:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.237 14:34:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.237 14:34:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.237 14:34:06 -- pm/common@21 -- $ date +%s 00:01:27.237 14:34:06 -- pm/common@25 -- $ sleep 1 00:01:27.237 14:34:06 -- pm/common@21 -- $ date +%s 00:01:27.237 14:34:06 -- pm/common@21 -- $ date +%s 00:01:27.237 14:34:06 -- pm/common@21 -- $ date +%s 00:01:27.237 14:34:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720960446 00:01:27.237 14:34:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720960446 00:01:27.237 14:34:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720960446 00:01:27.237 14:34:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720960446 00:01:27.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720960446_collect-vmstat.pm.log 00:01:27.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720960446_collect-cpu-load.pm.log 00:01:27.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720960446_collect-cpu-temp.pm.log 00:01:27.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720960446_collect-bmc-pm.bmc.pm.log 00:01:28.427 14:34:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:28.427 14:34:07 -- 
spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.427 14:34:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.427 14:34:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.427 14:34:07 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.427 Sun Jul 14 12:34:07 PM UTC 2024 00:01:28.427 14:34:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.427 v24.09-pre-202-g719d03c6a 00:01:28.427 14:34:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:28.427 14:34:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:28.427 14:34:07 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.427 14:34:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.427 14:34:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.427 ************************************ 00:01:28.427 START TEST asan 00:01:28.427 ************************************ 00:01:28.427 14:34:07 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:28.427 using asan 00:01:28.427 00:01:28.427 real 0m0.000s 00:01:28.427 user 0m0.000s 00:01:28.427 sys 0m0.000s 00:01:28.427 14:34:07 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:28.427 14:34:07 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.427 ************************************ 00:01:28.427 END TEST asan 00:01:28.427 ************************************ 00:01:28.427 14:34:07 -- common/autotest_common.sh@1142 -- $ return 0 00:01:28.427 14:34:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.427 14:34:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.427 14:34:07 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.427 14:34:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.427 14:34:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.427 ************************************ 00:01:28.427 START TEST ubsan 00:01:28.427 ************************************ 00:01:28.427 14:34:07 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:28.427 using ubsan 00:01:28.427 00:01:28.427 real 0m0.000s 00:01:28.427 user 0m0.000s 00:01:28.427 sys 0m0.000s 00:01:28.427 14:34:07 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:28.427 14:34:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.427 ************************************ 00:01:28.427 END TEST ubsan 00:01:28.427 ************************************ 00:01:28.427 14:34:07 -- common/autotest_common.sh@1142 -- $ return 0 00:01:28.427 14:34:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.427 14:34:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.427 14:34:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.427 14:34:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:28.427 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:28.427 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:28.685 Using 'verbs' RDMA provider 00:01:39.584 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:49.551 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.551 Creating mk/config.mk...done. 00:01:49.551 Creating mk/cc.flags.mk...done. 00:01:49.551 Type 'make' to build. 00:01:49.551 14:34:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:49.551 14:34:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.551 14:34:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.551 14:34:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.551 ************************************ 00:01:49.551 START TEST make 00:01:49.551 ************************************ 00:01:49.551 14:34:28 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:49.551 make[1]: Nothing to be done for 'all'. 00:01:57.671 The Meson build system 00:01:57.671 Version: 1.3.1 00:01:57.671 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:57.671 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:57.671 Build type: native build 00:01:57.671 Program cat found: YES (/usr/bin/cat) 00:01:57.671 Project name: DPDK 00:01:57.671 Project version: 24.03.0 00:01:57.671 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.671 C linker for the host machine: cc ld.bfd 2.39-16 00:01:57.671 Host machine cpu family: x86_64 00:01:57.671 Host machine cpu: x86_64 00:01:57.671 Message: ## Building in Developer Mode ## 00:01:57.671 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.671 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.671 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.671 Program python3 found: YES (/usr/bin/python3) 00:01:57.671 Program cat found: YES (/usr/bin/cat) 00:01:57.671 Compiler for C supports arguments -march=native: YES 00:01:57.671 Checking for size of "void *" : 8 00:01:57.671 Checking for size of "void *" : 8 (cached) 00:01:57.671 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:57.671 Library m found: YES 00:01:57.671 Library numa found: YES 00:01:57.671 Has header "numaif.h" : YES 00:01:57.671 Library fdt found: NO 00:01:57.671 Library execinfo found: NO 00:01:57.671 Has header "execinfo.h" : YES 00:01:57.671 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.671 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.671 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.671 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.671 Run-time dependency openssl found: YES 3.0.9 00:01:57.671 Run-time dependency libpcap found: YES 1.10.4 00:01:57.671 Has header "pcap.h" with dependency libpcap: YES 00:01:57.671 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.671 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.671 Compiler for C supports arguments -Wformat: YES 00:01:57.671 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.671 Compiler for C supports arguments -Wformat-security: NO 00:01:57.671 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.671 Compiler for C supports 
arguments -Wmissing-prototypes: YES 00:01:57.671 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.671 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.671 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.671 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.671 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.671 Compiler for C supports arguments -Wundef: YES 00:01:57.671 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.671 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.671 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.671 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.671 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.671 Program objdump found: YES (/usr/bin/objdump) 00:01:57.671 Compiler for C supports arguments -mavx512f: YES 00:01:57.671 Checking if "AVX512 checking" compiles: YES 00:01:57.671 Fetching value of define "__SSE4_2__" : 1 00:01:57.671 Fetching value of define "__AES__" : 1 00:01:57.671 Fetching value of define "__AVX__" : 1 00:01:57.671 Fetching value of define "__AVX2__" : (undefined) 00:01:57.671 Fetching value of define "__AVX512BW__" : (undefined) 00:01:57.671 Fetching value of define "__AVX512CD__" : (undefined) 00:01:57.671 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:57.671 Fetching value of define "__AVX512F__" : (undefined) 00:01:57.671 Fetching value of define "__AVX512VL__" : (undefined) 00:01:57.671 Fetching value of define "__PCLMUL__" : 1 00:01:57.671 Fetching value of define "__RDRND__" : 1 00:01:57.671 Fetching value of define "__RDSEED__" : (undefined) 00:01:57.671 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.671 Fetching value of define "__znver1__" : (undefined) 00:01:57.671 Fetching value of define "__znver2__" : (undefined) 00:01:57.671 Fetching value of define "__znver3__" : (undefined) 00:01:57.671 Fetching value of define "__znver4__" : (undefined) 00:01:57.671 Library asan found: YES 00:01:57.671 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.671 Message: lib/log: Defining dependency "log" 00:01:57.671 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.671 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.671 Library rt found: YES 00:01:57.671 Checking for function "getentropy" : NO 00:01:57.671 Message: lib/eal: Defining dependency "eal" 00:01:57.671 Message: lib/ring: Defining dependency "ring" 00:01:57.671 Message: lib/rcu: Defining dependency "rcu" 00:01:57.671 Message: lib/mempool: Defining dependency "mempool" 00:01:57.671 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.671 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.671 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.671 Compiler for C supports arguments -mpclmul: YES 00:01:57.671 Compiler for C supports arguments -maes: YES 00:01:57.671 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.671 Compiler for C supports arguments -mavx512bw: YES 00:01:57.671 Compiler for C supports arguments -mavx512dq: YES 00:01:57.671 Compiler for C supports arguments -mavx512vl: YES 00:01:57.671 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.671 Compiler for C supports arguments -mavx2: YES 00:01:57.671 Compiler for C supports arguments -mavx: YES 00:01:57.671 Message: lib/net: Defining dependency "net" 00:01:57.671 Message: lib/meter: Defining 
dependency "meter" 00:01:57.671 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.671 Message: lib/pci: Defining dependency "pci" 00:01:57.671 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.671 Message: lib/hash: Defining dependency "hash" 00:01:57.671 Message: lib/timer: Defining dependency "timer" 00:01:57.671 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.671 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.671 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.671 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.671 Message: lib/power: Defining dependency "power" 00:01:57.671 Message: lib/reorder: Defining dependency "reorder" 00:01:57.671 Message: lib/security: Defining dependency "security" 00:01:57.671 Has header "linux/userfaultfd.h" : YES 00:01:57.671 Has header "linux/vduse.h" : YES 00:01:57.671 Message: lib/vhost: Defining dependency "vhost" 00:01:57.671 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.671 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.671 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.671 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.671 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.671 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.671 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.671 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.671 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.671 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.671 Program doxygen found: YES (/usr/bin/doxygen) 00:01:57.671 Configuring doxy-api-html.conf using configuration 00:01:57.672 Configuring doxy-api-man.conf using configuration 00:01:57.672 Program mandb found: YES (/usr/bin/mandb) 00:01:57.672 Program sphinx-build found: NO 00:01:57.672 Configuring rte_build_config.h using configuration 00:01:57.672 Message: 00:01:57.672 ================= 00:01:57.672 Applications Enabled 00:01:57.672 ================= 00:01:57.672 00:01:57.672 apps: 00:01:57.672 00:01:57.672 00:01:57.672 Message: 00:01:57.672 ================= 00:01:57.672 Libraries Enabled 00:01:57.672 ================= 00:01:57.672 00:01:57.672 libs: 00:01:57.672 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.672 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.672 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.672 00:01:57.672 Message: 00:01:57.672 =============== 00:01:57.672 Drivers Enabled 00:01:57.672 =============== 00:01:57.672 00:01:57.672 common: 00:01:57.672 00:01:57.672 bus: 00:01:57.672 pci, vdev, 00:01:57.672 mempool: 00:01:57.672 ring, 00:01:57.672 dma: 00:01:57.672 00:01:57.672 net: 00:01:57.672 00:01:57.672 crypto: 00:01:57.672 00:01:57.672 compress: 00:01:57.672 00:01:57.672 vdpa: 00:01:57.672 00:01:57.672 00:01:57.672 Message: 00:01:57.672 ================= 00:01:57.672 Content Skipped 00:01:57.672 ================= 00:01:57.672 00:01:57.672 apps: 00:01:57.672 dumpcap: explicitly disabled via build config 00:01:57.672 graph: explicitly disabled via build config 00:01:57.672 pdump: explicitly disabled via build config 00:01:57.672 proc-info: explicitly disabled via build config 00:01:57.672 test-acl: explicitly disabled via build config 00:01:57.672 test-bbdev: explicitly 
disabled via build config 00:01:57.672 test-cmdline: explicitly disabled via build config 00:01:57.672 test-compress-perf: explicitly disabled via build config 00:01:57.672 test-crypto-perf: explicitly disabled via build config 00:01:57.672 test-dma-perf: explicitly disabled via build config 00:01:57.672 test-eventdev: explicitly disabled via build config 00:01:57.672 test-fib: explicitly disabled via build config 00:01:57.672 test-flow-perf: explicitly disabled via build config 00:01:57.672 test-gpudev: explicitly disabled via build config 00:01:57.672 test-mldev: explicitly disabled via build config 00:01:57.672 test-pipeline: explicitly disabled via build config 00:01:57.672 test-pmd: explicitly disabled via build config 00:01:57.672 test-regex: explicitly disabled via build config 00:01:57.672 test-sad: explicitly disabled via build config 00:01:57.672 test-security-perf: explicitly disabled via build config 00:01:57.672 00:01:57.672 libs: 00:01:57.672 argparse: explicitly disabled via build config 00:01:57.672 metrics: explicitly disabled via build config 00:01:57.672 acl: explicitly disabled via build config 00:01:57.672 bbdev: explicitly disabled via build config 00:01:57.672 bitratestats: explicitly disabled via build config 00:01:57.672 bpf: explicitly disabled via build config 00:01:57.672 cfgfile: explicitly disabled via build config 00:01:57.672 distributor: explicitly disabled via build config 00:01:57.672 efd: explicitly disabled via build config 00:01:57.672 eventdev: explicitly disabled via build config 00:01:57.672 dispatcher: explicitly disabled via build config 00:01:57.672 gpudev: explicitly disabled via build config 00:01:57.672 gro: explicitly disabled via build config 00:01:57.672 gso: explicitly disabled via build config 00:01:57.672 ip_frag: explicitly disabled via build config 00:01:57.672 jobstats: explicitly disabled via build config 00:01:57.672 latencystats: explicitly disabled via build config 00:01:57.672 lpm: explicitly disabled via build config 00:01:57.672 member: explicitly disabled via build config 00:01:57.672 pcapng: explicitly disabled via build config 00:01:57.672 rawdev: explicitly disabled via build config 00:01:57.672 regexdev: explicitly disabled via build config 00:01:57.672 mldev: explicitly disabled via build config 00:01:57.672 rib: explicitly disabled via build config 00:01:57.672 sched: explicitly disabled via build config 00:01:57.672 stack: explicitly disabled via build config 00:01:57.672 ipsec: explicitly disabled via build config 00:01:57.672 pdcp: explicitly disabled via build config 00:01:57.672 fib: explicitly disabled via build config 00:01:57.672 port: explicitly disabled via build config 00:01:57.672 pdump: explicitly disabled via build config 00:01:57.672 table: explicitly disabled via build config 00:01:57.672 pipeline: explicitly disabled via build config 00:01:57.672 graph: explicitly disabled via build config 00:01:57.672 node: explicitly disabled via build config 00:01:57.672 00:01:57.672 drivers: 00:01:57.672 common/cpt: not in enabled drivers build config 00:01:57.672 common/dpaax: not in enabled drivers build config 00:01:57.672 common/iavf: not in enabled drivers build config 00:01:57.672 common/idpf: not in enabled drivers build config 00:01:57.672 common/ionic: not in enabled drivers build config 00:01:57.672 common/mvep: not in enabled drivers build config 00:01:57.672 common/octeontx: not in enabled drivers build config 00:01:57.672 bus/auxiliary: not in enabled drivers build config 00:01:57.672 bus/cdx: not in 
enabled drivers build config 00:01:57.672 bus/dpaa: not in enabled drivers build config 00:01:57.672 bus/fslmc: not in enabled drivers build config 00:01:57.672 bus/ifpga: not in enabled drivers build config 00:01:57.672 bus/platform: not in enabled drivers build config 00:01:57.672 bus/uacce: not in enabled drivers build config 00:01:57.672 bus/vmbus: not in enabled drivers build config 00:01:57.672 common/cnxk: not in enabled drivers build config 00:01:57.672 common/mlx5: not in enabled drivers build config 00:01:57.672 common/nfp: not in enabled drivers build config 00:01:57.672 common/nitrox: not in enabled drivers build config 00:01:57.672 common/qat: not in enabled drivers build config 00:01:57.672 common/sfc_efx: not in enabled drivers build config 00:01:57.672 mempool/bucket: not in enabled drivers build config 00:01:57.672 mempool/cnxk: not in enabled drivers build config 00:01:57.672 mempool/dpaa: not in enabled drivers build config 00:01:57.672 mempool/dpaa2: not in enabled drivers build config 00:01:57.672 mempool/octeontx: not in enabled drivers build config 00:01:57.672 mempool/stack: not in enabled drivers build config 00:01:57.672 dma/cnxk: not in enabled drivers build config 00:01:57.672 dma/dpaa: not in enabled drivers build config 00:01:57.672 dma/dpaa2: not in enabled drivers build config 00:01:57.672 dma/hisilicon: not in enabled drivers build config 00:01:57.672 dma/idxd: not in enabled drivers build config 00:01:57.672 dma/ioat: not in enabled drivers build config 00:01:57.672 dma/skeleton: not in enabled drivers build config 00:01:57.672 net/af_packet: not in enabled drivers build config 00:01:57.672 net/af_xdp: not in enabled drivers build config 00:01:57.672 net/ark: not in enabled drivers build config 00:01:57.672 net/atlantic: not in enabled drivers build config 00:01:57.672 net/avp: not in enabled drivers build config 00:01:57.672 net/axgbe: not in enabled drivers build config 00:01:57.672 net/bnx2x: not in enabled drivers build config 00:01:57.672 net/bnxt: not in enabled drivers build config 00:01:57.672 net/bonding: not in enabled drivers build config 00:01:57.672 net/cnxk: not in enabled drivers build config 00:01:57.672 net/cpfl: not in enabled drivers build config 00:01:57.672 net/cxgbe: not in enabled drivers build config 00:01:57.672 net/dpaa: not in enabled drivers build config 00:01:57.672 net/dpaa2: not in enabled drivers build config 00:01:57.672 net/e1000: not in enabled drivers build config 00:01:57.672 net/ena: not in enabled drivers build config 00:01:57.672 net/enetc: not in enabled drivers build config 00:01:57.672 net/enetfec: not in enabled drivers build config 00:01:57.672 net/enic: not in enabled drivers build config 00:01:57.672 net/failsafe: not in enabled drivers build config 00:01:57.672 net/fm10k: not in enabled drivers build config 00:01:57.672 net/gve: not in enabled drivers build config 00:01:57.672 net/hinic: not in enabled drivers build config 00:01:57.672 net/hns3: not in enabled drivers build config 00:01:57.672 net/i40e: not in enabled drivers build config 00:01:57.672 net/iavf: not in enabled drivers build config 00:01:57.672 net/ice: not in enabled drivers build config 00:01:57.672 net/idpf: not in enabled drivers build config 00:01:57.672 net/igc: not in enabled drivers build config 00:01:57.672 net/ionic: not in enabled drivers build config 00:01:57.672 net/ipn3ke: not in enabled drivers build config 00:01:57.672 net/ixgbe: not in enabled drivers build config 00:01:57.672 net/mana: not in enabled drivers build config 
00:01:57.672 net/memif: not in enabled drivers build config 00:01:57.672 net/mlx4: not in enabled drivers build config 00:01:57.672 net/mlx5: not in enabled drivers build config 00:01:57.672 net/mvneta: not in enabled drivers build config 00:01:57.672 net/mvpp2: not in enabled drivers build config 00:01:57.672 net/netvsc: not in enabled drivers build config 00:01:57.672 net/nfb: not in enabled drivers build config 00:01:57.672 net/nfp: not in enabled drivers build config 00:01:57.672 net/ngbe: not in enabled drivers build config 00:01:57.672 net/null: not in enabled drivers build config 00:01:57.672 net/octeontx: not in enabled drivers build config 00:01:57.672 net/octeon_ep: not in enabled drivers build config 00:01:57.672 net/pcap: not in enabled drivers build config 00:01:57.672 net/pfe: not in enabled drivers build config 00:01:57.672 net/qede: not in enabled drivers build config 00:01:57.672 net/ring: not in enabled drivers build config 00:01:57.672 net/sfc: not in enabled drivers build config 00:01:57.672 net/softnic: not in enabled drivers build config 00:01:57.672 net/tap: not in enabled drivers build config 00:01:57.672 net/thunderx: not in enabled drivers build config 00:01:57.672 net/txgbe: not in enabled drivers build config 00:01:57.672 net/vdev_netvsc: not in enabled drivers build config 00:01:57.672 net/vhost: not in enabled drivers build config 00:01:57.672 net/virtio: not in enabled drivers build config 00:01:57.672 net/vmxnet3: not in enabled drivers build config 00:01:57.672 raw/*: missing internal dependency, "rawdev" 00:01:57.672 crypto/armv8: not in enabled drivers build config 00:01:57.672 crypto/bcmfs: not in enabled drivers build config 00:01:57.672 crypto/caam_jr: not in enabled drivers build config 00:01:57.672 crypto/ccp: not in enabled drivers build config 00:01:57.672 crypto/cnxk: not in enabled drivers build config 00:01:57.672 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.672 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.672 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.672 crypto/mlx5: not in enabled drivers build config 00:01:57.672 crypto/mvsam: not in enabled drivers build config 00:01:57.672 crypto/nitrox: not in enabled drivers build config 00:01:57.672 crypto/null: not in enabled drivers build config 00:01:57.672 crypto/octeontx: not in enabled drivers build config 00:01:57.673 crypto/openssl: not in enabled drivers build config 00:01:57.673 crypto/scheduler: not in enabled drivers build config 00:01:57.673 crypto/uadk: not in enabled drivers build config 00:01:57.673 crypto/virtio: not in enabled drivers build config 00:01:57.673 compress/isal: not in enabled drivers build config 00:01:57.673 compress/mlx5: not in enabled drivers build config 00:01:57.673 compress/nitrox: not in enabled drivers build config 00:01:57.673 compress/octeontx: not in enabled drivers build config 00:01:57.673 compress/zlib: not in enabled drivers build config 00:01:57.673 regex/*: missing internal dependency, "regexdev" 00:01:57.673 ml/*: missing internal dependency, "mldev" 00:01:57.673 vdpa/ifc: not in enabled drivers build config 00:01:57.673 vdpa/mlx5: not in enabled drivers build config 00:01:57.673 vdpa/nfp: not in enabled drivers build config 00:01:57.673 vdpa/sfc: not in enabled drivers build config 00:01:57.673 event/*: missing internal dependency, "eventdev" 00:01:57.673 baseband/*: missing internal dependency, "bbdev" 00:01:57.673 gpu/*: missing internal dependency, "gpudev" 00:01:57.673 00:01:57.673 00:01:57.673 
Build targets in project: 85 00:01:57.673 00:01:57.673 DPDK 24.03.0 00:01:57.673 00:01:57.673 User defined options 00:01:57.673 buildtype : debug 00:01:57.673 default_library : shared 00:01:57.673 libdir : lib 00:01:57.673 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.673 b_sanitize : address 00:01:57.673 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.673 c_link_args : 00:01:57.673 cpu_instruction_set: native 00:01:57.673 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:57.673 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:57.673 enable_docs : false 00:01:57.673 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.673 enable_kmods : false 00:01:57.673 max_lcores : 128 00:01:57.673 tests : false 00:01:57.673 00:01:57.673 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.251 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.251 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.251 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.251 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.251 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.251 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.251 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.251 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.251 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.251 [9/268] Linking static target lib/librte_kvargs.a 00:01:58.251 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.251 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.537 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.537 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.537 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.537 [15/268] Linking static target lib/librte_log.a 00:01:58.537 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:59.116 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.116 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.116 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.116 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.116 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.116 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.116 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.116 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.116 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.116 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.116 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.116 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.377 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.377 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.377 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:59.377 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.377 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.377 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:59.377 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.377 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:59.377 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:59.377 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.377 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.377 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.377 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.377 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.377 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.377 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.377 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.377 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:59.377 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.377 [48/268] Linking static target lib/librte_telemetry.a 00:01:59.377 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.377 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.377 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:59.377 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:59.377 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:59.377 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.377 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.377 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.377 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.377 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.377 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.377 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:59.638 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:59.638 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:59.638 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.638 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:59.638 [65/268] Linking target lib/librte_log.so.24.1 00:01:59.898 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.162 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.162 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.162 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.162 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.162 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.162 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.162 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.162 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.162 [75/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.162 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.162 [77/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.162 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.162 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.162 [80/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.422 [81/268] Linking static target lib/librte_pci.a 00:02:00.422 [82/268] Linking target lib/librte_kvargs.so.24.1 00:02:00.422 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.422 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.422 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.422 [86/268] Linking static target lib/librte_ring.a 00:02:00.422 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.422 [88/268] Linking static target lib/librte_meter.a 00:02:00.422 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.422 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.422 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.422 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.422 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.422 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.422 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.422 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:00.422 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.422 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.422 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.422 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.422 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.422 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.422 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.422 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.422 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.422 [106/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.422 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.686 [108/268] Linking target lib/librte_telemetry.so.24.1 00:02:00.686 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.686 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.686 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.686 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.686 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:00.686 [114/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:00.686 [115/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.686 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.686 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.686 [118/268] Linking static target lib/librte_mempool.a 00:02:00.686 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.686 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.686 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.686 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.686 [123/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.947 [124/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.947 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.947 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.947 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.947 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.947 [129/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:00.947 [130/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.947 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.947 [132/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.947 [133/268] Linking static target lib/librte_rcu.a 00:02:01.210 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.210 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.210 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:01.210 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.210 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.210 [139/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.210 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.210 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:01.210 [142/268] Linking static target lib/librte_cmdline.a 00:02:01.210 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.470 [144/268] Linking static target lib/librte_eal.a 00:02:01.470 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:01.470 [146/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.470 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.470 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.470 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.470 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.470 [151/268] Linking static target lib/librte_timer.a 00:02:01.470 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.732 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.732 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.732 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.732 [156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.732 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.732 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.732 [159/268] Linking static target lib/librte_dmadev.a 00:02:01.732 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.991 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.991 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.991 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.991 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.991 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.991 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.250 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.250 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.250 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:02.250 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.250 [171/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.250 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.250 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.250 [174/268] Linking static target lib/librte_net.a 00:02:02.250 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.250 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.250 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.250 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.250 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.250 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.250 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.250 [182/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.250 [183/268] Linking static target lib/librte_power.a 00:02:02.508 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.508 [185/268] Generating lib/net.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:02.508 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:02.508 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.508 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:02.508 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.508 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.508 [191/268] Linking static target drivers/librte_bus_vdev.a 00:02:02.508 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.508 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.766 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:02.766 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.766 [196/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.766 [197/268] Linking static target drivers/librte_bus_pci.a 00:02:02.766 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.766 [199/268] Linking static target lib/librte_hash.a 00:02:02.766 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.766 [201/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:02.766 [202/268] Linking static target lib/librte_compressdev.a 00:02:02.766 [203/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.766 [204/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.766 [205/268] Linking static target lib/librte_reorder.a 00:02:02.766 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.766 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.766 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.766 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:02.766 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.023 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:03.023 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.023 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.281 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.281 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.539 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.539 [217/268] Linking static target lib/librte_security.a 00:02:04.104 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.104 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.668 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.668 [221/268] Linking static target lib/librte_mbuf.a 00:02:04.926 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.926 [223/268] Linking static target lib/librte_cryptodev.a 
00:02:05.184 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.750 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.750 [226/268] Linking static target lib/librte_ethdev.a 00:02:05.750 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.650 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.650 [229/268] Linking target lib/librte_eal.so.24.1 00:02:07.650 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.650 [231/268] Linking target lib/librte_meter.so.24.1 00:02:07.650 [232/268] Linking target lib/librte_pci.so.24.1 00:02:07.650 [233/268] Linking target lib/librte_ring.so.24.1 00:02:07.650 [234/268] Linking target lib/librte_timer.so.24.1 00:02:07.650 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.650 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:07.650 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.650 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.650 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.650 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.650 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.650 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:07.650 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:07.650 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.908 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.908 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.908 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.908 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:08.166 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:08.166 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:08.166 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:08.166 [252/268] Linking target lib/librte_net.so.24.1 00:02:08.166 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:08.166 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:08.166 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:08.424 [256/268] Linking target lib/librte_security.so.24.1 00:02:08.424 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:08.424 [258/268] Linking target lib/librte_hash.so.24.1 00:02:08.424 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.682 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.056 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.056 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:10.314 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:10.314 [264/268] Linking target lib/librte_power.so.24.1 00:02:32.228 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.228 [266/268] Linking static target lib/librte_vhost.a 00:02:32.486 [267/268] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.745 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.745 INFO: autodetecting backend as ninja 00:02:32.745 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:33.682 CC lib/ut_mock/mock.o 00:02:33.682 CC lib/log/log.o 00:02:33.682 CC lib/log/log_flags.o 00:02:33.682 CC lib/log/log_deprecated.o 00:02:33.682 CC lib/ut/ut.o 00:02:33.682 LIB libspdk_log.a 00:02:33.682 LIB libspdk_ut_mock.a 00:02:33.682 LIB libspdk_ut.a 00:02:33.682 SO libspdk_log.so.7.0 00:02:33.682 SO libspdk_ut.so.2.0 00:02:33.682 SO libspdk_ut_mock.so.6.0 00:02:33.959 SYMLINK libspdk_ut.so 00:02:33.960 SYMLINK libspdk_ut_mock.so 00:02:33.960 SYMLINK libspdk_log.so 00:02:33.960 CC lib/dma/dma.o 00:02:33.960 CC lib/util/base64.o 00:02:33.960 CC lib/util/bit_array.o 00:02:33.960 CXX lib/trace_parser/trace.o 00:02:33.960 CC lib/util/cpuset.o 00:02:33.960 CC lib/ioat/ioat.o 00:02:33.960 CC lib/util/crc16.o 00:02:33.960 CC lib/util/crc32.o 00:02:33.960 CC lib/util/crc32c.o 00:02:33.960 CC lib/util/crc32_ieee.o 00:02:33.960 CC lib/util/crc64.o 00:02:33.960 CC lib/util/dif.o 00:02:33.960 CC lib/util/fd.o 00:02:33.960 CC lib/util/file.o 00:02:33.960 CC lib/util/hexlify.o 00:02:33.960 CC lib/util/iov.o 00:02:33.960 CC lib/util/math.o 00:02:33.960 CC lib/util/pipe.o 00:02:33.960 CC lib/util/strerror_tls.o 00:02:33.960 CC lib/util/string.o 00:02:33.960 CC lib/util/uuid.o 00:02:33.960 CC lib/util/fd_group.o 00:02:33.960 CC lib/util/xor.o 00:02:33.960 CC lib/util/zipf.o 00:02:34.228 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.228 CC lib/vfio_user/host/vfio_user.o 00:02:34.228 LIB libspdk_dma.a 00:02:34.228 SO libspdk_dma.so.4.0 00:02:34.228 SYMLINK libspdk_dma.so 00:02:34.486 LIB libspdk_ioat.a 00:02:34.486 SO libspdk_ioat.so.7.0 00:02:34.486 SYMLINK libspdk_ioat.so 00:02:34.486 LIB libspdk_vfio_user.a 00:02:34.486 SO libspdk_vfio_user.so.5.0 00:02:34.486 SYMLINK libspdk_vfio_user.so 00:02:34.743 LIB libspdk_util.a 00:02:34.743 SO libspdk_util.so.9.1 00:02:35.000 SYMLINK libspdk_util.so 00:02:35.257 CC lib/env_dpdk/env.o 00:02:35.257 CC lib/idxd/idxd.o 00:02:35.257 CC lib/env_dpdk/memory.o 00:02:35.257 CC lib/vmd/vmd.o 00:02:35.257 CC lib/conf/conf.o 00:02:35.257 CC lib/json/json_parse.o 00:02:35.257 CC lib/rdma_provider/common.o 00:02:35.257 CC lib/env_dpdk/pci.o 00:02:35.257 CC lib/idxd/idxd_user.o 00:02:35.257 CC lib/vmd/led.o 00:02:35.257 CC lib/json/json_util.o 00:02:35.258 CC lib/rdma_utils/rdma_utils.o 00:02:35.258 CC lib/env_dpdk/init.o 00:02:35.258 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.258 CC lib/json/json_write.o 00:02:35.258 CC lib/idxd/idxd_kernel.o 00:02:35.258 CC lib/env_dpdk/threads.o 00:02:35.258 CC lib/env_dpdk/pci_ioat.o 00:02:35.258 CC lib/env_dpdk/pci_virtio.o 00:02:35.258 CC lib/env_dpdk/pci_vmd.o 00:02:35.258 CC lib/env_dpdk/pci_idxd.o 00:02:35.258 CC lib/env_dpdk/pci_event.o 00:02:35.258 CC lib/env_dpdk/sigbus_handler.o 00:02:35.258 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.258 CC lib/env_dpdk/pci_dpdk.o 00:02:35.258 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.258 LIB libspdk_trace_parser.a 00:02:35.258 SO libspdk_trace_parser.so.5.0 00:02:35.515 LIB libspdk_rdma_provider.a 00:02:35.515 SYMLINK libspdk_trace_parser.so 00:02:35.515 SO libspdk_rdma_provider.so.6.0 00:02:35.515 LIB libspdk_conf.a 00:02:35.515 LIB libspdk_rdma_utils.a 00:02:35.515 SO libspdk_conf.so.6.0 00:02:35.515 SYMLINK libspdk_rdma_provider.so 
00:02:35.515 SO libspdk_rdma_utils.so.1.0 00:02:35.515 LIB libspdk_json.a 00:02:35.515 SYMLINK libspdk_conf.so 00:02:35.515 SO libspdk_json.so.6.0 00:02:35.515 SYMLINK libspdk_rdma_utils.so 00:02:35.515 SYMLINK libspdk_json.so 00:02:35.772 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.772 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.772 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.772 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.030 LIB libspdk_idxd.a 00:02:36.030 SO libspdk_idxd.so.12.0 00:02:36.030 SYMLINK libspdk_idxd.so 00:02:36.030 LIB libspdk_vmd.a 00:02:36.030 SO libspdk_vmd.so.6.0 00:02:36.030 LIB libspdk_jsonrpc.a 00:02:36.030 SO libspdk_jsonrpc.so.6.0 00:02:36.030 SYMLINK libspdk_vmd.so 00:02:36.288 SYMLINK libspdk_jsonrpc.so 00:02:36.288 CC lib/rpc/rpc.o 00:02:36.546 LIB libspdk_rpc.a 00:02:36.546 SO libspdk_rpc.so.6.0 00:02:36.546 SYMLINK libspdk_rpc.so 00:02:36.803 CC lib/keyring/keyring.o 00:02:36.803 CC lib/trace/trace.o 00:02:36.803 CC lib/notify/notify.o 00:02:36.803 CC lib/keyring/keyring_rpc.o 00:02:36.803 CC lib/trace/trace_flags.o 00:02:36.803 CC lib/notify/notify_rpc.o 00:02:36.803 CC lib/trace/trace_rpc.o 00:02:37.061 LIB libspdk_notify.a 00:02:37.061 SO libspdk_notify.so.6.0 00:02:37.061 SYMLINK libspdk_notify.so 00:02:37.061 LIB libspdk_keyring.a 00:02:37.061 SO libspdk_keyring.so.1.0 00:02:37.061 LIB libspdk_trace.a 00:02:37.061 SO libspdk_trace.so.10.0 00:02:37.061 SYMLINK libspdk_keyring.so 00:02:37.319 SYMLINK libspdk_trace.so 00:02:37.319 CC lib/thread/thread.o 00:02:37.319 CC lib/thread/iobuf.o 00:02:37.319 CC lib/sock/sock.o 00:02:37.319 CC lib/sock/sock_rpc.o 00:02:37.885 LIB libspdk_sock.a 00:02:37.885 SO libspdk_sock.so.10.0 00:02:37.885 SYMLINK libspdk_sock.so 00:02:38.143 LIB libspdk_env_dpdk.a 00:02:38.143 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.143 CC lib/nvme/nvme_ctrlr.o 00:02:38.143 CC lib/nvme/nvme_fabric.o 00:02:38.143 CC lib/nvme/nvme_ns_cmd.o 00:02:38.143 CC lib/nvme/nvme_ns.o 00:02:38.143 CC lib/nvme/nvme_pcie_common.o 00:02:38.143 CC lib/nvme/nvme_pcie.o 00:02:38.143 CC lib/nvme/nvme_qpair.o 00:02:38.143 CC lib/nvme/nvme.o 00:02:38.143 CC lib/nvme/nvme_quirks.o 00:02:38.143 CC lib/nvme/nvme_transport.o 00:02:38.143 CC lib/nvme/nvme_discovery.o 00:02:38.143 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.143 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.143 CC lib/nvme/nvme_tcp.o 00:02:38.143 CC lib/nvme/nvme_opal.o 00:02:38.143 CC lib/nvme/nvme_io_msg.o 00:02:38.143 CC lib/nvme/nvme_poll_group.o 00:02:38.143 CC lib/nvme/nvme_zns.o 00:02:38.143 CC lib/nvme/nvme_stubs.o 00:02:38.143 CC lib/nvme/nvme_auth.o 00:02:38.143 CC lib/nvme/nvme_cuse.o 00:02:38.143 CC lib/nvme/nvme_rdma.o 00:02:38.143 SO libspdk_env_dpdk.so.14.1 00:02:38.402 SYMLINK libspdk_env_dpdk.so 00:02:39.363 LIB libspdk_thread.a 00:02:39.364 SO libspdk_thread.so.10.1 00:02:39.364 SYMLINK libspdk_thread.so 00:02:39.622 CC lib/virtio/virtio.o 00:02:39.622 CC lib/accel/accel.o 00:02:39.622 CC lib/blob/blobstore.o 00:02:39.622 CC lib/init/json_config.o 00:02:39.622 CC lib/virtio/virtio_vhost_user.o 00:02:39.622 CC lib/accel/accel_rpc.o 00:02:39.622 CC lib/blob/request.o 00:02:39.622 CC lib/init/subsystem.o 00:02:39.622 CC lib/virtio/virtio_vfio_user.o 00:02:39.622 CC lib/blob/zeroes.o 00:02:39.622 CC lib/accel/accel_sw.o 00:02:39.622 CC lib/init/subsystem_rpc.o 00:02:39.622 CC lib/virtio/virtio_pci.o 00:02:39.622 CC lib/blob/blob_bs_dev.o 00:02:39.622 CC lib/init/rpc.o 00:02:39.880 LIB libspdk_init.a 00:02:39.880 SO libspdk_init.so.5.0 00:02:40.139 SYMLINK libspdk_init.so 00:02:40.139 LIB 
libspdk_virtio.a 00:02:40.139 SO libspdk_virtio.so.7.0 00:02:40.139 CC lib/event/app.o 00:02:40.139 CC lib/event/reactor.o 00:02:40.139 CC lib/event/log_rpc.o 00:02:40.139 CC lib/event/app_rpc.o 00:02:40.139 CC lib/event/scheduler_static.o 00:02:40.139 SYMLINK libspdk_virtio.so 00:02:40.705 LIB libspdk_event.a 00:02:40.705 SO libspdk_event.so.14.0 00:02:40.964 SYMLINK libspdk_event.so 00:02:40.964 LIB libspdk_accel.a 00:02:40.964 SO libspdk_accel.so.15.1 00:02:40.964 LIB libspdk_nvme.a 00:02:40.964 SYMLINK libspdk_accel.so 00:02:41.222 SO libspdk_nvme.so.13.1 00:02:41.222 CC lib/bdev/bdev.o 00:02:41.222 CC lib/bdev/bdev_rpc.o 00:02:41.222 CC lib/bdev/bdev_zone.o 00:02:41.222 CC lib/bdev/part.o 00:02:41.222 CC lib/bdev/scsi_nvme.o 00:02:41.481 SYMLINK libspdk_nvme.so 00:02:44.008 LIB libspdk_blob.a 00:02:44.008 SO libspdk_blob.so.11.0 00:02:44.008 SYMLINK libspdk_blob.so 00:02:44.008 CC lib/lvol/lvol.o 00:02:44.008 CC lib/blobfs/blobfs.o 00:02:44.008 CC lib/blobfs/tree.o 00:02:44.572 LIB libspdk_bdev.a 00:02:44.572 SO libspdk_bdev.so.15.1 00:02:44.572 SYMLINK libspdk_bdev.so 00:02:44.838 CC lib/scsi/dev.o 00:02:44.838 CC lib/nbd/nbd.o 00:02:44.838 CC lib/ublk/ublk.o 00:02:44.838 CC lib/scsi/lun.o 00:02:44.838 CC lib/nvmf/ctrlr.o 00:02:44.838 CC lib/nbd/nbd_rpc.o 00:02:44.838 CC lib/ftl/ftl_core.o 00:02:44.838 CC lib/scsi/port.o 00:02:44.838 CC lib/ublk/ublk_rpc.o 00:02:44.838 CC lib/nvmf/ctrlr_discovery.o 00:02:44.838 CC lib/ftl/ftl_init.o 00:02:44.838 CC lib/scsi/scsi.o 00:02:44.838 CC lib/nvmf/ctrlr_bdev.o 00:02:44.838 CC lib/scsi/scsi_bdev.o 00:02:44.838 CC lib/ftl/ftl_layout.o 00:02:44.838 CC lib/nvmf/subsystem.o 00:02:44.838 CC lib/ftl/ftl_debug.o 00:02:44.838 CC lib/nvmf/nvmf.o 00:02:44.838 CC lib/ftl/ftl_io.o 00:02:44.838 CC lib/scsi/scsi_pr.o 00:02:44.838 CC lib/ftl/ftl_sb.o 00:02:44.838 CC lib/scsi/scsi_rpc.o 00:02:44.838 CC lib/nvmf/nvmf_rpc.o 00:02:44.838 CC lib/ftl/ftl_l2p.o 00:02:44.838 CC lib/nvmf/transport.o 00:02:44.838 CC lib/ftl/ftl_l2p_flat.o 00:02:44.838 CC lib/scsi/task.o 00:02:44.838 CC lib/nvmf/tcp.o 00:02:44.838 CC lib/ftl/ftl_nv_cache.o 00:02:44.838 CC lib/ftl/ftl_band.o 00:02:44.838 CC lib/nvmf/stubs.o 00:02:44.838 CC lib/ftl/ftl_band_ops.o 00:02:44.838 CC lib/nvmf/mdns_server.o 00:02:44.838 CC lib/nvmf/rdma.o 00:02:44.838 CC lib/ftl/ftl_writer.o 00:02:44.838 CC lib/ftl/ftl_rq.o 00:02:44.838 CC lib/nvmf/auth.o 00:02:44.838 CC lib/ftl/ftl_reloc.o 00:02:44.838 CC lib/ftl/ftl_l2p_cache.o 00:02:44.838 CC lib/ftl/ftl_p2l.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.838 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.099 LIB libspdk_blobfs.a 00:02:45.099 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.359 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.359 CC lib/ftl/utils/ftl_conf.o 00:02:45.359 CC lib/ftl/utils/ftl_md.o 00:02:45.359 CC lib/ftl/utils/ftl_mempool.o 00:02:45.359 SO libspdk_blobfs.so.10.0 00:02:45.359 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.359 CC lib/ftl/utils/ftl_property.o 00:02:45.359 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.359 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.359 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.359 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.359 SYMLINK libspdk_blobfs.so 00:02:45.359 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.359 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.359 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.359 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.620 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.621 LIB libspdk_lvol.a 00:02:45.621 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.621 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.621 CC lib/ftl/base/ftl_base_dev.o 00:02:45.621 SO libspdk_lvol.so.10.0 00:02:45.621 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.621 CC lib/ftl/ftl_trace.o 00:02:45.621 SYMLINK libspdk_lvol.so 00:02:45.878 LIB libspdk_nbd.a 00:02:45.878 SO libspdk_nbd.so.7.0 00:02:45.878 SYMLINK libspdk_nbd.so 00:02:45.878 LIB libspdk_scsi.a 00:02:46.136 SO libspdk_scsi.so.9.0 00:02:46.136 SYMLINK libspdk_scsi.so 00:02:46.136 LIB libspdk_ublk.a 00:02:46.136 SO libspdk_ublk.so.3.0 00:02:46.136 SYMLINK libspdk_ublk.so 00:02:46.394 CC lib/vhost/vhost.o 00:02:46.394 CC lib/iscsi/conn.o 00:02:46.394 CC lib/iscsi/init_grp.o 00:02:46.394 CC lib/vhost/vhost_rpc.o 00:02:46.394 CC lib/iscsi/iscsi.o 00:02:46.394 CC lib/vhost/vhost_scsi.o 00:02:46.394 CC lib/iscsi/md5.o 00:02:46.394 CC lib/vhost/vhost_blk.o 00:02:46.394 CC lib/iscsi/param.o 00:02:46.394 CC lib/vhost/rte_vhost_user.o 00:02:46.394 CC lib/iscsi/portal_grp.o 00:02:46.394 CC lib/iscsi/tgt_node.o 00:02:46.394 CC lib/iscsi/iscsi_subsystem.o 00:02:46.394 CC lib/iscsi/iscsi_rpc.o 00:02:46.394 CC lib/iscsi/task.o 00:02:46.653 LIB libspdk_ftl.a 00:02:46.912 SO libspdk_ftl.so.9.0 00:02:47.170 SYMLINK libspdk_ftl.so 00:02:47.737 LIB libspdk_vhost.a 00:02:47.737 SO libspdk_vhost.so.8.0 00:02:47.737 SYMLINK libspdk_vhost.so 00:02:48.305 LIB libspdk_nvmf.a 00:02:48.305 LIB libspdk_iscsi.a 00:02:48.305 SO libspdk_nvmf.so.18.1 00:02:48.305 SO libspdk_iscsi.so.8.0 00:02:48.305 SYMLINK libspdk_iscsi.so 00:02:48.564 SYMLINK libspdk_nvmf.so 00:02:48.822 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.822 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.822 CC module/keyring/file/keyring.o 00:02:48.822 CC module/accel/ioat/accel_ioat.o 00:02:48.822 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.822 CC module/accel/error/accel_error.o 00:02:48.822 CC module/keyring/file/keyring_rpc.o 00:02:48.822 CC module/accel/error/accel_error_rpc.o 00:02:48.822 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.822 CC module/accel/dsa/accel_dsa.o 00:02:48.822 CC module/accel/iaa/accel_iaa.o 00:02:48.822 CC module/blob/bdev/blob_bdev.o 00:02:48.822 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.822 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.822 CC module/keyring/linux/keyring.o 00:02:48.822 CC module/sock/posix/posix.o 00:02:48.822 CC module/keyring/linux/keyring_rpc.o 00:02:48.822 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.822 LIB libspdk_env_dpdk_rpc.a 00:02:48.822 SO libspdk_env_dpdk_rpc.so.6.0 00:02:49.080 SYMLINK libspdk_env_dpdk_rpc.so 00:02:49.080 LIB libspdk_keyring_linux.a 00:02:49.080 LIB libspdk_keyring_file.a 00:02:49.080 LIB libspdk_scheduler_gscheduler.a 00:02:49.080 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.080 SO libspdk_keyring_linux.so.1.0 00:02:49.080 SO libspdk_keyring_file.so.1.0 00:02:49.080 SO libspdk_scheduler_gscheduler.so.4.0 00:02:49.080 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:49.080 LIB libspdk_accel_error.a 00:02:49.080 LIB libspdk_accel_ioat.a 00:02:49.080 LIB libspdk_scheduler_dynamic.a 00:02:49.080 SO libspdk_accel_error.so.2.0 00:02:49.080 LIB libspdk_accel_iaa.a 00:02:49.080 SO 
libspdk_accel_ioat.so.6.0 00:02:49.080 SYMLINK libspdk_scheduler_gscheduler.so 00:02:49.080 SO libspdk_scheduler_dynamic.so.4.0 00:02:49.080 SYMLINK libspdk_keyring_linux.so 00:02:49.080 SYMLINK libspdk_keyring_file.so 00:02:49.080 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:49.080 SO libspdk_accel_iaa.so.3.0 00:02:49.080 SYMLINK libspdk_accel_error.so 00:02:49.080 SYMLINK libspdk_accel_ioat.so 00:02:49.080 SYMLINK libspdk_scheduler_dynamic.so 00:02:49.080 LIB libspdk_accel_dsa.a 00:02:49.080 LIB libspdk_blob_bdev.a 00:02:49.080 SYMLINK libspdk_accel_iaa.so 00:02:49.080 SO libspdk_accel_dsa.so.5.0 00:02:49.080 SO libspdk_blob_bdev.so.11.0 00:02:49.338 SYMLINK libspdk_accel_dsa.so 00:02:49.338 SYMLINK libspdk_blob_bdev.so 00:02:49.597 CC module/bdev/error/vbdev_error.o 00:02:49.597 CC module/bdev/malloc/bdev_malloc.o 00:02:49.597 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.597 CC module/bdev/aio/bdev_aio.o 00:02:49.597 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.597 CC module/bdev/raid/bdev_raid.o 00:02:49.597 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.597 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.597 CC module/bdev/gpt/gpt.o 00:02:49.597 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.597 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.597 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.597 CC module/bdev/nvme/bdev_nvme.o 00:02:49.597 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.597 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.597 CC module/bdev/raid/raid0.o 00:02:49.597 CC module/bdev/delay/vbdev_delay.o 00:02:49.597 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.597 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.597 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.597 CC module/bdev/null/bdev_null.o 00:02:49.597 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.597 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.597 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.597 CC module/bdev/ftl/bdev_ftl.o 00:02:49.597 CC module/bdev/null/bdev_null_rpc.o 00:02:49.597 CC module/bdev/raid/raid1.o 00:02:49.597 CC module/bdev/nvme/nvme_rpc.o 00:02:49.597 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.597 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.597 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.597 CC module/bdev/raid/concat.o 00:02:49.597 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.597 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.597 CC module/bdev/split/vbdev_split.o 00:02:49.597 CC module/bdev/nvme/vbdev_opal.o 00:02:49.597 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.597 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.597 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.597 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.597 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.597 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.856 LIB libspdk_blobfs_bdev.a 00:02:49.856 LIB libspdk_bdev_error.a 00:02:49.856 SO libspdk_blobfs_bdev.so.6.0 00:02:49.856 SO libspdk_bdev_error.so.6.0 00:02:50.114 LIB libspdk_bdev_split.a 00:02:50.114 SYMLINK libspdk_blobfs_bdev.so 00:02:50.114 SYMLINK libspdk_bdev_error.so 00:02:50.114 LIB libspdk_sock_posix.a 00:02:50.114 SO libspdk_bdev_split.so.6.0 00:02:50.114 SO libspdk_sock_posix.so.6.0 00:02:50.114 LIB libspdk_bdev_gpt.a 00:02:50.114 LIB libspdk_bdev_delay.a 00:02:50.114 SO libspdk_bdev_gpt.so.6.0 00:02:50.114 SO libspdk_bdev_delay.so.6.0 00:02:50.114 LIB libspdk_bdev_null.a 00:02:50.114 SYMLINK libspdk_bdev_split.so 00:02:50.114 LIB libspdk_bdev_zone_block.a 00:02:50.114 LIB libspdk_bdev_passthru.a 00:02:50.114 SO 
libspdk_bdev_null.so.6.0 00:02:50.114 SYMLINK libspdk_sock_posix.so 00:02:50.114 SO libspdk_bdev_zone_block.so.6.0 00:02:50.114 LIB libspdk_bdev_ftl.a 00:02:50.114 SYMLINK libspdk_bdev_gpt.so 00:02:50.114 SO libspdk_bdev_passthru.so.6.0 00:02:50.114 SYMLINK libspdk_bdev_delay.so 00:02:50.114 SO libspdk_bdev_ftl.so.6.0 00:02:50.114 LIB libspdk_bdev_aio.a 00:02:50.114 SYMLINK libspdk_bdev_null.so 00:02:50.114 SYMLINK libspdk_bdev_zone_block.so 00:02:50.114 SO libspdk_bdev_aio.so.6.0 00:02:50.114 SYMLINK libspdk_bdev_passthru.so 00:02:50.114 SYMLINK libspdk_bdev_ftl.so 00:02:50.114 LIB libspdk_bdev_iscsi.a 00:02:50.373 LIB libspdk_bdev_malloc.a 00:02:50.373 SO libspdk_bdev_iscsi.so.6.0 00:02:50.373 SYMLINK libspdk_bdev_aio.so 00:02:50.373 SO libspdk_bdev_malloc.so.6.0 00:02:50.373 SYMLINK libspdk_bdev_iscsi.so 00:02:50.373 SYMLINK libspdk_bdev_malloc.so 00:02:50.373 LIB libspdk_bdev_virtio.a 00:02:50.373 LIB libspdk_bdev_lvol.a 00:02:50.373 SO libspdk_bdev_virtio.so.6.0 00:02:50.373 SO libspdk_bdev_lvol.so.6.0 00:02:50.373 SYMLINK libspdk_bdev_virtio.so 00:02:50.373 SYMLINK libspdk_bdev_lvol.so 00:02:50.940 LIB libspdk_bdev_raid.a 00:02:50.940 SO libspdk_bdev_raid.so.6.0 00:02:51.268 SYMLINK libspdk_bdev_raid.so 00:02:52.663 LIB libspdk_bdev_nvme.a 00:02:52.663 SO libspdk_bdev_nvme.so.7.0 00:02:52.663 SYMLINK libspdk_bdev_nvme.so 00:02:52.921 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.921 CC module/event/subsystems/keyring/keyring.o 00:02:52.921 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.921 CC module/event/subsystems/sock/sock.o 00:02:52.921 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.921 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.921 CC module/event/subsystems/vmd/vmd.o 00:02:52.921 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:53.179 LIB libspdk_event_keyring.a 00:02:53.179 LIB libspdk_event_vhost_blk.a 00:02:53.179 LIB libspdk_event_vmd.a 00:02:53.179 LIB libspdk_event_scheduler.a 00:02:53.179 LIB libspdk_event_sock.a 00:02:53.179 LIB libspdk_event_iobuf.a 00:02:53.179 SO libspdk_event_keyring.so.1.0 00:02:53.179 SO libspdk_event_vhost_blk.so.3.0 00:02:53.179 SO libspdk_event_scheduler.so.4.0 00:02:53.179 SO libspdk_event_sock.so.5.0 00:02:53.179 SO libspdk_event_vmd.so.6.0 00:02:53.179 SO libspdk_event_iobuf.so.3.0 00:02:53.179 SYMLINK libspdk_event_keyring.so 00:02:53.179 SYMLINK libspdk_event_vhost_blk.so 00:02:53.179 SYMLINK libspdk_event_scheduler.so 00:02:53.179 SYMLINK libspdk_event_sock.so 00:02:53.179 SYMLINK libspdk_event_vmd.so 00:02:53.179 SYMLINK libspdk_event_iobuf.so 00:02:53.437 CC module/event/subsystems/accel/accel.o 00:02:53.695 LIB libspdk_event_accel.a 00:02:53.695 SO libspdk_event_accel.so.6.0 00:02:53.695 SYMLINK libspdk_event_accel.so 00:02:53.953 CC module/event/subsystems/bdev/bdev.o 00:02:53.953 LIB libspdk_event_bdev.a 00:02:53.953 SO libspdk_event_bdev.so.6.0 00:02:54.210 SYMLINK libspdk_event_bdev.so 00:02:54.210 CC module/event/subsystems/scsi/scsi.o 00:02:54.210 CC module/event/subsystems/ublk/ublk.o 00:02:54.210 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.210 CC module/event/subsystems/nbd/nbd.o 00:02:54.210 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.466 LIB libspdk_event_nbd.a 00:02:54.466 LIB libspdk_event_ublk.a 00:02:54.466 LIB libspdk_event_scsi.a 00:02:54.466 SO libspdk_event_ublk.so.3.0 00:02:54.466 SO libspdk_event_nbd.so.6.0 00:02:54.466 SO libspdk_event_scsi.so.6.0 00:02:54.466 SYMLINK libspdk_event_nbd.so 00:02:54.466 SYMLINK libspdk_event_ublk.so 00:02:54.466 SYMLINK 
libspdk_event_scsi.so 00:02:54.466 LIB libspdk_event_nvmf.a 00:02:54.466 SO libspdk_event_nvmf.so.6.0 00:02:54.723 SYMLINK libspdk_event_nvmf.so 00:02:54.723 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.723 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.723 LIB libspdk_event_vhost_scsi.a 00:02:54.723 LIB libspdk_event_iscsi.a 00:02:54.723 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.980 SO libspdk_event_iscsi.so.6.0 00:02:54.980 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.980 SYMLINK libspdk_event_iscsi.so 00:02:54.980 SO libspdk.so.6.0 00:02:54.980 SYMLINK libspdk.so 00:02:55.244 CXX app/trace/trace.o 00:02:55.244 CC app/spdk_top/spdk_top.o 00:02:55.244 CC app/spdk_nvme_identify/identify.o 00:02:55.244 CC app/trace_record/trace_record.o 00:02:55.244 CC test/rpc_client/rpc_client_test.o 00:02:55.244 CC app/spdk_nvme_perf/perf.o 00:02:55.244 CC app/spdk_lspci/spdk_lspci.o 00:02:55.244 TEST_HEADER include/spdk/accel.h 00:02:55.244 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.244 TEST_HEADER include/spdk/assert.h 00:02:55.244 TEST_HEADER include/spdk/accel_module.h 00:02:55.244 TEST_HEADER include/spdk/barrier.h 00:02:55.244 TEST_HEADER include/spdk/base64.h 00:02:55.244 TEST_HEADER include/spdk/bdev.h 00:02:55.244 TEST_HEADER include/spdk/bdev_module.h 00:02:55.244 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.244 TEST_HEADER include/spdk/bit_array.h 00:02:55.244 TEST_HEADER include/spdk/bit_pool.h 00:02:55.244 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.244 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.244 TEST_HEADER include/spdk/blobfs.h 00:02:55.244 TEST_HEADER include/spdk/blob.h 00:02:55.244 TEST_HEADER include/spdk/conf.h 00:02:55.244 TEST_HEADER include/spdk/config.h 00:02:55.244 TEST_HEADER include/spdk/cpuset.h 00:02:55.244 TEST_HEADER include/spdk/crc16.h 00:02:55.244 TEST_HEADER include/spdk/crc32.h 00:02:55.244 TEST_HEADER include/spdk/crc64.h 00:02:55.244 TEST_HEADER include/spdk/dif.h 00:02:55.244 TEST_HEADER include/spdk/dma.h 00:02:55.244 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.244 TEST_HEADER include/spdk/endian.h 00:02:55.244 TEST_HEADER include/spdk/env.h 00:02:55.244 TEST_HEADER include/spdk/fd_group.h 00:02:55.244 TEST_HEADER include/spdk/event.h 00:02:55.244 TEST_HEADER include/spdk/file.h 00:02:55.244 TEST_HEADER include/spdk/fd.h 00:02:55.244 TEST_HEADER include/spdk/ftl.h 00:02:55.244 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.244 TEST_HEADER include/spdk/hexlify.h 00:02:55.244 TEST_HEADER include/spdk/histogram_data.h 00:02:55.244 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.244 TEST_HEADER include/spdk/idxd.h 00:02:55.244 TEST_HEADER include/spdk/init.h 00:02:55.244 TEST_HEADER include/spdk/ioat.h 00:02:55.244 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.244 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.244 TEST_HEADER include/spdk/json.h 00:02:55.244 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.244 TEST_HEADER include/spdk/keyring.h 00:02:55.244 TEST_HEADER include/spdk/keyring_module.h 00:02:55.244 TEST_HEADER include/spdk/likely.h 00:02:55.244 TEST_HEADER include/spdk/lvol.h 00:02:55.244 TEST_HEADER include/spdk/log.h 00:02:55.244 TEST_HEADER include/spdk/memory.h 00:02:55.244 TEST_HEADER include/spdk/mmio.h 00:02:55.244 TEST_HEADER include/spdk/nbd.h 00:02:55.244 TEST_HEADER include/spdk/notify.h 00:02:55.244 TEST_HEADER include/spdk/nvme.h 00:02:55.244 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.244 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.244 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.244 TEST_HEADER 
include/spdk/nvme_spec.h 00:02:55.244 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.244 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.244 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.244 TEST_HEADER include/spdk/nvmf.h 00:02:55.244 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.244 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.244 TEST_HEADER include/spdk/opal.h 00:02:55.244 TEST_HEADER include/spdk/pci_ids.h 00:02:55.244 TEST_HEADER include/spdk/opal_spec.h 00:02:55.244 TEST_HEADER include/spdk/pipe.h 00:02:55.244 TEST_HEADER include/spdk/reduce.h 00:02:55.244 TEST_HEADER include/spdk/queue.h 00:02:55.245 TEST_HEADER include/spdk/rpc.h 00:02:55.245 TEST_HEADER include/spdk/scheduler.h 00:02:55.245 TEST_HEADER include/spdk/scsi.h 00:02:55.245 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.245 TEST_HEADER include/spdk/sock.h 00:02:55.245 TEST_HEADER include/spdk/stdinc.h 00:02:55.245 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.245 TEST_HEADER include/spdk/string.h 00:02:55.245 TEST_HEADER include/spdk/thread.h 00:02:55.245 TEST_HEADER include/spdk/trace.h 00:02:55.245 TEST_HEADER include/spdk/trace_parser.h 00:02:55.245 TEST_HEADER include/spdk/tree.h 00:02:55.245 TEST_HEADER include/spdk/ublk.h 00:02:55.245 TEST_HEADER include/spdk/util.h 00:02:55.245 TEST_HEADER include/spdk/uuid.h 00:02:55.245 TEST_HEADER include/spdk/version.h 00:02:55.245 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.245 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.245 TEST_HEADER include/spdk/vhost.h 00:02:55.245 TEST_HEADER include/spdk/vmd.h 00:02:55.245 TEST_HEADER include/spdk/xor.h 00:02:55.245 TEST_HEADER include/spdk/zipf.h 00:02:55.245 CXX test/cpp_headers/accel.o 00:02:55.245 CXX test/cpp_headers/accel_module.o 00:02:55.245 CXX test/cpp_headers/assert.o 00:02:55.245 CXX test/cpp_headers/barrier.o 00:02:55.245 CXX test/cpp_headers/base64.o 00:02:55.245 CXX test/cpp_headers/bdev.o 00:02:55.245 CXX test/cpp_headers/bdev_module.o 00:02:55.245 CXX test/cpp_headers/bdev_zone.o 00:02:55.245 CC app/spdk_dd/spdk_dd.o 00:02:55.245 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.245 CXX test/cpp_headers/bit_array.o 00:02:55.245 CXX test/cpp_headers/bit_pool.o 00:02:55.245 CXX test/cpp_headers/blob_bdev.o 00:02:55.245 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.245 CXX test/cpp_headers/blobfs.o 00:02:55.245 CXX test/cpp_headers/blob.o 00:02:55.245 CC app/nvmf_tgt/nvmf_main.o 00:02:55.245 CXX test/cpp_headers/conf.o 00:02:55.245 CXX test/cpp_headers/config.o 00:02:55.245 CXX test/cpp_headers/cpuset.o 00:02:55.245 CXX test/cpp_headers/crc16.o 00:02:55.245 CC app/spdk_tgt/spdk_tgt.o 00:02:55.245 CC examples/ioat/perf/perf.o 00:02:55.245 CC examples/ioat/verify/verify.o 00:02:55.245 CXX test/cpp_headers/crc32.o 00:02:55.245 CC test/app/jsoncat/jsoncat.o 00:02:55.245 CC test/thread/poller_perf/poller_perf.o 00:02:55.245 CC test/app/stub/stub.o 00:02:55.245 CC test/app/histogram_perf/histogram_perf.o 00:02:55.245 CC examples/util/zipf/zipf.o 00:02:55.245 CC app/fio/nvme/fio_plugin.o 00:02:55.245 CC test/env/pci/pci_ut.o 00:02:55.245 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.245 CC test/env/vtophys/vtophys.o 00:02:55.504 CC test/env/memory/memory_ut.o 00:02:55.504 CC app/fio/bdev/fio_plugin.o 00:02:55.504 CC test/dma/test_dma/test_dma.o 00:02:55.504 CC test/app/bdev_svc/bdev_svc.o 00:02:55.504 LINK spdk_lspci 00:02:55.504 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.504 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.766 LINK rpc_client_test 00:02:55.766 LINK spdk_nvme_discover 00:02:55.766 
LINK interrupt_tgt 00:02:55.766 CXX test/cpp_headers/crc64.o 00:02:55.766 LINK histogram_perf 00:02:55.766 LINK jsoncat 00:02:55.766 LINK poller_perf 00:02:55.766 LINK nvmf_tgt 00:02:55.766 LINK zipf 00:02:55.766 LINK vtophys 00:02:55.766 CXX test/cpp_headers/dif.o 00:02:55.766 CXX test/cpp_headers/dma.o 00:02:55.766 CXX test/cpp_headers/endian.o 00:02:55.766 CXX test/cpp_headers/env_dpdk.o 00:02:55.766 CXX test/cpp_headers/env.o 00:02:55.766 CXX test/cpp_headers/event.o 00:02:55.766 LINK iscsi_tgt 00:02:55.766 LINK env_dpdk_post_init 00:02:55.766 CXX test/cpp_headers/fd_group.o 00:02:55.766 CXX test/cpp_headers/fd.o 00:02:55.766 CXX test/cpp_headers/file.o 00:02:55.766 LINK stub 00:02:55.766 CXX test/cpp_headers/ftl.o 00:02:55.766 LINK spdk_trace_record 00:02:55.766 CXX test/cpp_headers/gpt_spec.o 00:02:55.766 LINK spdk_tgt 00:02:55.766 CXX test/cpp_headers/hexlify.o 00:02:55.766 CXX test/cpp_headers/histogram_data.o 00:02:55.766 CXX test/cpp_headers/idxd.o 00:02:55.766 CXX test/cpp_headers/idxd_spec.o 00:02:55.766 LINK bdev_svc 00:02:55.766 LINK ioat_perf 00:02:55.766 CXX test/cpp_headers/init.o 00:02:55.766 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.766 LINK verify 00:02:56.028 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.028 CXX test/cpp_headers/ioat.o 00:02:56.028 CXX test/cpp_headers/ioat_spec.o 00:02:56.028 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.028 CXX test/cpp_headers/iscsi_spec.o 00:02:56.028 CXX test/cpp_headers/json.o 00:02:56.028 CXX test/cpp_headers/jsonrpc.o 00:02:56.028 CXX test/cpp_headers/keyring.o 00:02:56.028 CXX test/cpp_headers/keyring_module.o 00:02:56.028 CXX test/cpp_headers/likely.o 00:02:56.028 CXX test/cpp_headers/log.o 00:02:56.028 LINK spdk_dd 00:02:56.028 CXX test/cpp_headers/lvol.o 00:02:56.028 CXX test/cpp_headers/memory.o 00:02:56.028 CXX test/cpp_headers/mmio.o 00:02:56.028 CXX test/cpp_headers/nbd.o 00:02:56.028 CXX test/cpp_headers/notify.o 00:02:56.291 CXX test/cpp_headers/nvme.o 00:02:56.291 CXX test/cpp_headers/nvme_intel.o 00:02:56.291 CXX test/cpp_headers/nvme_ocssd.o 00:02:56.291 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.291 CXX test/cpp_headers/nvme_spec.o 00:02:56.291 CXX test/cpp_headers/nvme_zns.o 00:02:56.291 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.291 LINK spdk_trace 00:02:56.291 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.291 CXX test/cpp_headers/nvmf.o 00:02:56.291 CXX test/cpp_headers/nvmf_spec.o 00:02:56.291 CXX test/cpp_headers/nvmf_transport.o 00:02:56.291 CXX test/cpp_headers/opal.o 00:02:56.291 LINK test_dma 00:02:56.291 LINK pci_ut 00:02:56.291 CC test/event/event_perf/event_perf.o 00:02:56.291 CC test/event/reactor/reactor.o 00:02:56.291 CXX test/cpp_headers/opal_spec.o 00:02:56.291 CXX test/cpp_headers/pci_ids.o 00:02:56.291 CC test/event/reactor_perf/reactor_perf.o 00:02:56.291 CXX test/cpp_headers/pipe.o 00:02:56.291 CC test/event/app_repeat/app_repeat.o 00:02:56.554 CXX test/cpp_headers/queue.o 00:02:56.554 CC examples/sock/hello_world/hello_sock.o 00:02:56.554 CXX test/cpp_headers/reduce.o 00:02:56.554 CXX test/cpp_headers/rpc.o 00:02:56.554 CC examples/idxd/perf/perf.o 00:02:56.554 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.554 CC examples/thread/thread/thread_ex.o 00:02:56.554 CC test/event/scheduler/scheduler.o 00:02:56.554 CC examples/vmd/led/led.o 00:02:56.554 CXX test/cpp_headers/scheduler.o 00:02:56.554 CXX test/cpp_headers/scsi.o 00:02:56.554 CXX test/cpp_headers/scsi_spec.o 00:02:56.554 CXX test/cpp_headers/sock.o 00:02:56.554 CXX test/cpp_headers/stdinc.o 00:02:56.554 CXX 
test/cpp_headers/string.o 00:02:56.554 CXX test/cpp_headers/thread.o 00:02:56.554 CXX test/cpp_headers/trace.o 00:02:56.554 LINK nvme_fuzz 00:02:56.554 LINK spdk_bdev 00:02:56.554 CXX test/cpp_headers/trace_parser.o 00:02:56.554 CXX test/cpp_headers/tree.o 00:02:56.554 CXX test/cpp_headers/ublk.o 00:02:56.554 CXX test/cpp_headers/util.o 00:02:56.554 CXX test/cpp_headers/uuid.o 00:02:56.554 LINK reactor 00:02:56.554 CXX test/cpp_headers/version.o 00:02:56.554 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.554 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.554 LINK event_perf 00:02:56.822 CXX test/cpp_headers/vhost.o 00:02:56.822 CXX test/cpp_headers/vmd.o 00:02:56.822 LINK reactor_perf 00:02:56.822 CXX test/cpp_headers/xor.o 00:02:56.822 LINK lsvmd 00:02:56.822 CXX test/cpp_headers/zipf.o 00:02:56.822 LINK app_repeat 00:02:56.822 LINK spdk_nvme 00:02:56.822 LINK mem_callbacks 00:02:56.822 LINK led 00:02:56.822 CC app/vhost/vhost.o 00:02:57.081 LINK scheduler 00:02:57.081 LINK vhost_fuzz 00:02:57.081 CC test/nvme/sgl/sgl.o 00:02:57.081 CC test/nvme/aer/aer.o 00:02:57.081 CC test/nvme/reserve/reserve.o 00:02:57.081 CC test/nvme/overhead/overhead.o 00:02:57.081 CC test/nvme/reset/reset.o 00:02:57.081 CC test/nvme/simple_copy/simple_copy.o 00:02:57.081 CC test/nvme/e2edp/nvme_dp.o 00:02:57.081 CC test/nvme/startup/startup.o 00:02:57.081 LINK hello_sock 00:02:57.081 CC test/nvme/err_injection/err_injection.o 00:02:57.081 LINK thread 00:02:57.081 CC test/nvme/connect_stress/connect_stress.o 00:02:57.081 CC test/nvme/boot_partition/boot_partition.o 00:02:57.081 CC test/nvme/compliance/nvme_compliance.o 00:02:57.081 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.081 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.081 CC test/nvme/fdp/fdp.o 00:02:57.081 CC test/nvme/cuse/cuse.o 00:02:57.081 CC test/blobfs/mkfs/mkfs.o 00:02:57.081 CC test/accel/dif/dif.o 00:02:57.081 CC test/lvol/esnap/esnap.o 00:02:57.081 LINK spdk_nvme_identify 00:02:57.081 LINK spdk_nvme_perf 00:02:57.081 LINK vhost 00:02:57.340 LINK idxd_perf 00:02:57.340 LINK reserve 00:02:57.340 LINK doorbell_aers 00:02:57.340 LINK mkfs 00:02:57.340 LINK spdk_top 00:02:57.340 LINK boot_partition 00:02:57.340 LINK startup 00:02:57.340 LINK err_injection 00:02:57.340 LINK sgl 00:02:57.340 CC examples/nvme/abort/abort.o 00:02:57.340 LINK connect_stress 00:02:57.340 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.340 CC examples/nvme/reconnect/reconnect.o 00:02:57.340 CC examples/nvme/hello_world/hello_world.o 00:02:57.340 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.340 CC examples/nvme/arbitration/arbitration.o 00:02:57.340 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.340 CC examples/nvme/hotplug/hotplug.o 00:02:57.340 LINK overhead 00:02:57.340 LINK aer 00:02:57.598 CC examples/accel/perf/accel_perf.o 00:02:57.598 LINK nvme_dp 00:02:57.598 LINK fused_ordering 00:02:57.598 LINK simple_copy 00:02:57.598 CC examples/blob/cli/blobcli.o 00:02:57.598 LINK reset 00:02:57.598 CC examples/blob/hello_world/hello_blob.o 00:02:57.598 LINK fdp 00:02:57.598 LINK memory_ut 00:02:57.598 LINK nvme_compliance 00:02:57.855 LINK dif 00:02:57.855 LINK pmr_persistence 00:02:57.855 LINK cmb_copy 00:02:57.855 LINK hello_world 00:02:57.855 LINK hotplug 00:02:57.855 LINK hello_blob 00:02:57.855 LINK reconnect 00:02:57.855 LINK arbitration 00:02:58.113 LINK abort 00:02:58.113 LINK nvme_manage 00:02:58.113 LINK blobcli 00:02:58.113 LINK accel_perf 00:02:58.113 CC test/bdev/bdevio/bdevio.o 00:02:58.680 CC examples/bdev/hello_world/hello_bdev.o 
00:02:58.680 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.680 LINK bdevio 00:02:58.680 LINK iscsi_fuzz 00:02:58.938 LINK hello_bdev 00:02:58.938 LINK cuse 00:02:59.504 LINK bdevperf 00:03:00.070 CC examples/nvmf/nvmf/nvmf.o 00:03:00.328 LINK nvmf 00:03:03.614 LINK esnap 00:03:04.180 00:03:04.180 real 1m15.339s 00:03:04.180 user 11m15.922s 00:03:04.180 sys 2m25.543s 00:03:04.180 14:35:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:04.180 14:35:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:04.180 ************************************ 00:03:04.180 END TEST make 00:03:04.180 ************************************ 00:03:04.180 14:35:43 -- common/autotest_common.sh@1142 -- $ return 0 00:03:04.180 14:35:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:04.180 14:35:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:04.180 14:35:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:04.180 14:35:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:04.180 14:35:43 -- pm/common@44 -- $ pid=1657991 00:03:04.180 14:35:43 -- pm/common@50 -- $ kill -TERM 1657991 00:03:04.180 14:35:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:04.180 14:35:43 -- pm/common@44 -- $ pid=1657993 00:03:04.180 14:35:43 -- pm/common@50 -- $ kill -TERM 1657993 00:03:04.180 14:35:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:04.180 14:35:43 -- pm/common@44 -- $ pid=1657995 00:03:04.180 14:35:43 -- pm/common@50 -- $ kill -TERM 1657995 00:03:04.180 14:35:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:04.180 14:35:43 -- pm/common@44 -- $ pid=1658018 00:03:04.180 14:35:43 -- pm/common@50 -- $ sudo -E kill -TERM 1658018 00:03:04.180 14:35:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:04.180 14:35:43 -- nvmf/common.sh@7 -- # uname -s 00:03:04.180 14:35:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:04.180 14:35:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:04.180 14:35:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:04.180 14:35:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:04.180 14:35:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:04.180 14:35:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:04.180 14:35:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:04.180 14:35:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:04.180 14:35:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:04.180 14:35:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:04.180 14:35:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:04.180 14:35:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:04.180 14:35:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:04.180 
14:35:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:04.180 14:35:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:04.180 14:35:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:04.180 14:35:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:04.180 14:35:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:04.180 14:35:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.180 14:35:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.180 14:35:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.180 14:35:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.180 14:35:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.180 14:35:43 -- paths/export.sh@5 -- # export PATH 00:03:04.180 14:35:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.180 14:35:43 -- nvmf/common.sh@47 -- # : 0 00:03:04.180 14:35:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:04.180 14:35:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:04.180 14:35:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:04.180 14:35:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:04.180 14:35:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:04.180 14:35:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:04.180 14:35:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:04.180 14:35:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:04.180 14:35:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:04.180 14:35:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:04.180 14:35:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:04.180 14:35:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:04.180 14:35:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.180 14:35:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:04.180 14:35:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.180 14:35:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:04.180 14:35:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:04.180 14:35:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:04.180 14:35:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1716118 00:03:04.180 
14:35:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:04.180 14:35:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:04.180 14:35:43 -- pm/common@17 -- # local monitor 00:03:04.180 14:35:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@21 -- # date +%s 00:03:04.180 14:35:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.180 14:35:43 -- pm/common@21 -- # date +%s 00:03:04.180 14:35:43 -- pm/common@25 -- # sleep 1 00:03:04.180 14:35:43 -- pm/common@21 -- # date +%s 00:03:04.180 14:35:43 -- pm/common@21 -- # date +%s 00:03:04.180 14:35:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720960543 00:03:04.180 14:35:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720960543 00:03:04.180 14:35:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720960543 00:03:04.180 14:35:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720960543 00:03:04.438 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720960543_collect-vmstat.pm.log 00:03:04.438 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720960543_collect-cpu-load.pm.log 00:03:04.438 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720960543_collect-cpu-temp.pm.log 00:03:04.438 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720960543_collect-bmc-pm.bmc.pm.log 00:03:05.372 14:35:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.372 14:35:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:05.372 14:35:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.372 14:35:44 -- common/autotest_common.sh@10 -- # set +x 00:03:05.372 14:35:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:05.372 14:35:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:05.372 14:35:44 -- common/autotest_common.sh@10 -- # set +x 00:03:05.372 14:35:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:05.372 14:35:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.372 14:35:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.372 14:35:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:05.372 14:35:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.372 14:35:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:05.372 14:35:44 -- common/autotest_common.sh@1455 -- # uname 
00:03:05.372 14:35:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:05.372 14:35:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:05.372 14:35:44 -- common/autotest_common.sh@1475 -- # uname 00:03:05.372 14:35:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:05.372 14:35:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:05.372 14:35:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:05.372 14:35:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:05.372 14:35:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:05.372 14:35:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:05.372 --rc lcov_branch_coverage=1 00:03:05.372 --rc lcov_function_coverage=1 00:03:05.372 --rc genhtml_branch_coverage=1 00:03:05.372 --rc genhtml_function_coverage=1 00:03:05.372 --rc genhtml_legend=1 00:03:05.372 --rc geninfo_all_blocks=1 00:03:05.372 ' 00:03:05.372 14:35:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:05.372 --rc lcov_branch_coverage=1 00:03:05.372 --rc lcov_function_coverage=1 00:03:05.372 --rc genhtml_branch_coverage=1 00:03:05.372 --rc genhtml_function_coverage=1 00:03:05.372 --rc genhtml_legend=1 00:03:05.372 --rc geninfo_all_blocks=1 00:03:05.372 ' 00:03:05.372 14:35:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:05.372 --rc lcov_branch_coverage=1 00:03:05.372 --rc lcov_function_coverage=1 00:03:05.372 --rc genhtml_branch_coverage=1 00:03:05.372 --rc genhtml_function_coverage=1 00:03:05.372 --rc genhtml_legend=1 00:03:05.372 --rc geninfo_all_blocks=1 00:03:05.372 --no-external' 00:03:05.372 14:35:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:05.372 --rc lcov_branch_coverage=1 00:03:05.372 --rc lcov_function_coverage=1 00:03:05.372 --rc genhtml_branch_coverage=1 00:03:05.372 --rc genhtml_function_coverage=1 00:03:05.372 --rc genhtml_legend=1 00:03:05.372 --rc geninfo_all_blocks=1 00:03:05.372 --no-external' 00:03:05.372 14:35:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:05.372 lcov: LCOV version 1.14 00:03:05.372 14:35:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:10.694 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:10.694 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:10.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:10.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:10.695 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:10.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:10.953 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:10.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:10.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:32.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.925 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:38.185 14:36:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:38.185 14:36:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:38.185 14:36:16 -- common/autotest_common.sh@10 -- # set +x 00:03:38.185 14:36:16 -- spdk/autotest.sh@91 -- # rm -f 00:03:38.185 14:36:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.748 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:38.748 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:38.749 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:38.749 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:38.749 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:39.006 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:39.006 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:39.006 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:39.006 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:39.006 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:39.006 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:39.006 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:39.006 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:39.006 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:39.006 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:39.006 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:39.006 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:39.006 14:36:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:39.006 14:36:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.263 14:36:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.263 14:36:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.263 14:36:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.263 14:36:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.263 14:36:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.263 14:36:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.263 14:36:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.263 14:36:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:39.263 14:36:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.263 14:36:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.263 14:36:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:39.263 14:36:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:39.263 14:36:18 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:39.263 No valid GPT data, bailing 00:03:39.263 14:36:18 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.263 14:36:18 -- scripts/common.sh@391 -- # pt= 00:03:39.263 14:36:18 -- scripts/common.sh@392 -- # return 1 00:03:39.263 14:36:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:39.263 1+0 records in 00:03:39.263 1+0 records out 00:03:39.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218547 s, 480 MB/s 00:03:39.263 14:36:18 -- spdk/autotest.sh@118 -- # sync 00:03:39.263 14:36:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.263 14:36:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.263 14:36:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.160 14:36:20 -- spdk/autotest.sh@124 -- # uname -s 00:03:41.160 14:36:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:41.160 14:36:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.160 14:36:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.160 14:36:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.160 14:36:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.160 ************************************ 00:03:41.160 START TEST setup.sh 00:03:41.160 ************************************ 00:03:41.160 14:36:20 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.160 * Looking for test storage... 00:03:41.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.160 14:36:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:41.160 14:36:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.160 14:36:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:41.160 14:36:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.160 14:36:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.160 14:36:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.160 ************************************ 00:03:41.160 START TEST acl 00:03:41.160 ************************************ 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:41.160 * Looking for test storage... 
00:03:41.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.160 14:36:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.160 14:36:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:41.160 14:36:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.161 14:36:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.536 14:36:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:42.536 14:36:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:42.536 14:36:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.536 14:36:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:42.536 14:36:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.536 14:36:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:43.470 Hugepages 00:03:43.470 node hugesize free / total 00:03:43.470 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:43.470 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.470 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.730 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:43.730 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.730 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 00:03:43.731 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:43.731 14:36:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:43.731 14:36:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.731 14:36:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.731 14:36:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.731 ************************************ 00:03:43.731 START TEST denied 00:03:43.731 ************************************ 00:03:43.731 14:36:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:43.731 14:36:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:43.731 14:36:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:43.731 14:36:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:43.731 14:36:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.731 14:36:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.106 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:45.106 14:36:24 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.106 14:36:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.637 00:03:47.637 real 0m3.684s 00:03:47.637 user 0m1.130s 00:03:47.637 sys 0m1.647s 00:03:47.637 14:36:26 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.637 14:36:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:47.637 ************************************ 00:03:47.637 END TEST denied 00:03:47.637 ************************************ 00:03:47.637 14:36:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:47.637 14:36:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:47.637 14:36:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.637 14:36:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.637 14:36:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:47.637 ************************************ 00:03:47.637 START TEST allowed 00:03:47.637 ************************************ 00:03:47.637 14:36:26 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:47.637 14:36:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:47.637 14:36:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:47.637 14:36:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:47.637 14:36:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.637 14:36:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.168 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.168 14:36:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:50.168 14:36:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:50.168 14:36:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:50.168 14:36:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.168 14:36:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.546 00:03:51.546 real 0m3.896s 00:03:51.546 user 0m1.001s 00:03:51.546 sys 0m1.732s 00:03:51.546 14:36:30 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.546 14:36:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:51.546 ************************************ 00:03:51.546 END TEST allowed 00:03:51.546 ************************************ 00:03:51.546 14:36:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:51.546 00:03:51.546 real 0m10.358s 00:03:51.546 user 0m3.203s 00:03:51.546 sys 0m5.153s 00:03:51.546 14:36:30 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.546 14:36:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.546 ************************************ 00:03:51.546 END TEST acl 00:03:51.546 ************************************ 00:03:51.546 14:36:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:51.546 14:36:30 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.546 14:36:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.546 14:36:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.546 14:36:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.546 ************************************ 00:03:51.546 START TEST hugepages 00:03:51.546 ************************************ 00:03:51.546 14:36:30 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.546 * Looking for test storage... 00:03:51.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44036288 kB' 'MemAvailable: 47507308 kB' 'Buffers: 2704 kB' 'Cached: 10059876 kB' 'SwapCached: 0 kB' 'Active: 7013000 kB' 'Inactive: 3492380 kB' 'Active(anon): 6626008 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446216 kB' 'Mapped: 177712 kB' 'Shmem: 6183208 kB' 'KReclaimable: 170976 kB' 'Slab: 517776 kB' 'SReclaimable: 170976 kB' 'SUnreclaim: 346800 kB' 'KernelStack: 12912 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7730100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.546 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.547 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.548 
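The block above is setup/common.sh's get_meminfo helper looking up Hugepagesize: it captures the meminfo snapshot, reads it one "key: value" pair at a time with IFS=': ', skipping every field until the requested key matches, and echoes the value (2048 kB here). hugepages.sh then records that as default_hugepages, unsets HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE, enumerates the two NUMA nodes, and zeroes any existing per-node reservations. A minimal stand-alone sketch of the lookup loop, reading /proc/meminfo directly rather than the mapfile'd snapshot the traced script iterates over:

  get_meminfo() {
    local get=$1 var val _
    # scan "key: value" pairs until the requested field is found
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < /proc/meminfo
    return 1
  }

  get_meminfo Hugepagesize    # prints 2048 on this machine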
14:36:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:51.548 14:36:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:51.548 14:36:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.548 14:36:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.548 14:36:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.548 ************************************ 00:03:51.548 START TEST default_setup 00:03:51.548 ************************************ 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.548 14:36:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.923 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.923 
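default_setup then asks get_test_nr_hugepages for 2097152 kB (2 GiB) of hugepage memory on node 0, which with the 2048 kB page size reported above works out to the 1024 pages recorded in nodes_test, and clear_hp writes 0 into every existing per-node hugepage count before setup.sh rebinds the ioatdma and NVMe devices to vfio-pci (the rebind lines around this point). Roughly, and only as a sketch (the sysfs file the traced "echo 0" lands in is assumed to be nr_hugepages):

  size_kb=2097152                                   # requested hugepage memory in kB (2 GiB)
  hugepagesize_kb=2048                              # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 2097152 / 2048 = 1024 pages for node 0

  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"                     # drop any leftover reservations
  done
  export CLEAR_HUGE=yes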
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.923 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.923 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.864 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46148464 kB' 'MemAvailable: 49619376 kB' 'Buffers: 2704 kB' 'Cached: 10059956 kB' 'SwapCached: 0 kB' 'Active: 7034136 kB' 'Inactive: 3492380 kB' 'Active(anon): 6647144 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467180 kB' 'Mapped: 178496 kB' 'Shmem: 6183288 kB' 'KReclaimable: 170760 kB' 'Slab: 517400 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346640 kB' 
'KernelStack: 12656 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7753880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.864 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 
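The snapshot just printed shows the result of that setup run: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, so the requested 2 GiB is reserved and untouched. verify_nr_hugepages starts by checking whether transparent hugepages are disabled; the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above compares the THP "enabled" setting against the literal "[never]", and since madvise mode is active it goes on to read AnonHugePages (0 kB in this run) so THP-backed anonymous memory can be accounted for as well. A sketch of that gate, with the usual sysfs path assumed and get_meminfo as sketched earlier:

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)                     # 0 here
  else
    anon=0
  fi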
14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.865 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.866 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46144500 kB' 'MemAvailable: 49615412 kB' 'Buffers: 2704 kB' 'Cached: 10059960 kB' 'SwapCached: 0 kB' 'Active: 7036856 kB' 'Inactive: 3492380 kB' 'Active(anon): 6649864 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469884 kB' 'Mapped: 178580 kB' 'Shmem: 6183292 kB' 'KReclaimable: 170760 kB' 'Slab: 517388 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346628 kB' 'KernelStack: 12672 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7757212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195876 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.866 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- 
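With HugePages_Surp also read back as 0, surp is set to 0 and the same scan starts a third time for HugePages_Rsvd below; every lookup re-captures the full meminfo snapshot and walks it field by field, which is what produces these long runs of identical read/compare/continue lines. For a single field the whole pass is equivalent to a one-liner like the following (an illustration only, not what the test scripts use):

  awk -v k='HugePages_Surp:' '$1 == k {print $2}' /proc/meminfo   # prints 0 on this host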
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.867 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46144784 kB' 'MemAvailable: 49615696 kB' 'Buffers: 2704 kB' 'Cached: 10059976 kB' 'SwapCached: 0 kB' 'Active: 7036996 kB' 'Inactive: 3492380 kB' 'Active(anon): 6650004 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470008 kB' 'Mapped: 178740 kB' 'Shmem: 6183308 kB' 'KReclaimable: 170760 kB' 'Slab: 517480 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346720 kB' 'KernelStack: 12704 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7757232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
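The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one "field: value" pair at a time and skipping every field that is not the one requested (HugePages_Rsvd on this pass). A minimal, self-contained sketch of that skip-until-match pattern, assuming a plain /proc/meminfo layout; the function name get_field is illustrative, and the real helper additionally handles per-node meminfo files:
  # Print the value of one /proc/meminfo field, e.g. get_field HugePages_Rsvd
  get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # same skip/continue loop as in the trace
      echo "$val"
      return 0
    done < /proc/meminfo
    return 1                             # field not present
  }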
00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.868 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 
14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.869 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.171 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.172 nr_hugepages=1024 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.172 resv_hugepages=0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.172 surplus_hugepages=0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.172 anon_hugepages=0 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46145644 
kB' 'MemAvailable: 49616556 kB' 'Buffers: 2704 kB' 'Cached: 10059996 kB' 'SwapCached: 0 kB' 'Active: 7031332 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644340 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464284 kB' 'Mapped: 177808 kB' 'Shmem: 6183328 kB' 'KReclaimable: 170760 kB' 'Slab: 517472 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346712 kB' 'KernelStack: 12688 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.172 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
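Further down in this trace the same helper is called with node=0, at which point it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo and strips the "Node N " prefix from each line (the mapfile and mem=("${mem[@]#Node +([0-9]) }") steps visible above). A rough sketch of that source selection, hedged: the path follows the sysfs layout shown in the log, the helper name is made up, and sed stands in for the bash extglob expansion the real script uses:
  # Emit meminfo lines for the whole system or for one NUMA node
  pick_meminfo() {                 # usage: pick_meminfo [node-id]
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node lines look like "Node 0 HugePages_Total: 1024"; drop the prefix
    sed -E 's/^Node [0-9]+ //' "$mem_f"
  }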
00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
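The loop has now reached HugePages_Total, and the 1024 echoed next feeds the consistency check in setup/hugepages.sh: with HugePages_Rsvd and HugePages_Surp both read back as 0 earlier in the trace, it verifies (( 1024 == nr_hugepages + surp + resv )). A hedged restatement of that check with this run's numbers (only the variable total is an invented name; nr_hugepages, surp and resv are the script's own):
  nr_hugepages=1024 surp=0 resv=0          # values taken from this trace
  total=1024                               # HugePages_Total read from /proc/meminfo
  if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"  # 1024 == 1024 + 0 + 0
  fi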
00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20972676 kB' 'MemUsed: 11904264 kB' 'SwapCached: 0 kB' 'Active: 5429052 kB' 'Inactive: 3263500 kB' 'Active(anon): 5244088 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418196 kB' 'Mapped: 96976 kB' 'AnonPages: 277532 kB' 'Shmem: 4969732 kB' 'KernelStack: 7816 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105604 kB' 'Slab: 289220 kB' 'SReclaimable: 105604 kB' 'SUnreclaim: 183616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.173 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.174 14:36:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining meminfo key, KernelStack through HugePages_Free, none of which matches HugePages_Surp]
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:54.174 node0=1024 expecting 1024
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:54.174
00:03:54.174 real    0m2.497s
00:03:54.174 user    0m0.695s
00:03:54.174 sys     0m0.893s
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:54.174 14:36:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:54.174 ************************************
00:03:54.174 END TEST default_setup
00:03:54.174 ************************************
00:03:54.174 14:36:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:54.174 14:36:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:54.174 14:36:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:54.174 14:36:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:54.174 14:36:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:54.174 ************************************
00:03:54.174 START TEST per_node_1G_alloc
00:03:54.174 ************************************
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
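The get_test_nr_hugepages 1048576 0 1 call traced above requests 1048576 kB (1 GiB) of hugepages for each of NUMA nodes 0 and 1. With the 2048 kB hugepage size reported in the meminfo dumps further down, that works out to 512 pages per node; a minimal sketch of that arithmetic (variable names are illustrative, not the harness's own):

    # Derive a per-node hugepage count from a 1 GiB-per-node request.
    size_kb=1048576                                                            # requested size per node, in kB
    hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)        # typically 2048
    nr_hugepages=$(( size_kb / hugepage_kb ))                                  # 1048576 / 2048 = 512
    for node in 0 1; do
        echo "node${node}: ${nr_hugepages} hugepages"
    done

The trace that follows shows the harness arriving at the same figure: nr_hugepages=512, assigned once per requested node.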
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.174 14:36:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:55.107 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:55.107 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.107 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:55.107 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:55.107 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:55.107 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:55.107 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:55.107 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:55.107 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:55.107 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:55.107 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:55.107 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:55.107 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:55.107 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:55.107 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:55.107 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:55.107 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
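Running scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1, as traced above, asks the SPDK setup script to reserve 512 hugepages on each of nodes 0 and 1 (1024 in total). An illustrative, hedged equivalent using the kernel's per-node sysfs knob rather than the script itself (requires root; path assumes 2048 kB pages):

    # Reserve 512 x 2 MiB hugepages on NUMA nodes 0 and 1 via sysfs.
    for node in 0 1; do
        echo 512 > /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    done
    grep HugePages_Total /proc/meminfo    # expect 1024 afterwards, as the meminfo dump below shows

The verification phase that follows re-reads /proc/meminfo to confirm the counts actually landed.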
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.371 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.372 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46150776 kB' 'MemAvailable: 49621688 kB' 'Buffers: 2704 kB' 'Cached: 10060072 kB' 'SwapCached: 0 kB' 'Active: 7032132 kB' 'Inactive: 3492380 kB' 'Active(anon): 6645140 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464924 kB' 'Mapped: 177908 kB' 'Shmem: 6183404 kB' 'KReclaimable: 170760 kB' 'Slab: 517696 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346936 kB' 'KernelStack: 12688 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
00:03:55.372 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.372 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every key from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
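The AnonHugePages lookup above walks the meminfo dump field by field: each line is split with IFS=': ' into a key and a value, the key is compared against the requested one, and the value is echoed on the first match. A condensed, self-contained sketch of that lookup pattern (simplified from what the trace shows; not the setup/common.sh helper verbatim):

    # get_meminfo KEY [NODE] -- print the value for KEY from /proc/meminfo,
    # or from the node's meminfo when a node number is given and exists.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Per-node files prefix every line with "Node N "; drop that prefix.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Free      # e.g. 1024 after the allocation above
    get_meminfo MemFree 0           # per-node lookup on a NUMA machine

Because the comparison is linear, keys near the end of meminfo (the HugePages_* counters) produce the long runs of compare/continue steps condensed in this trace.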
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46151548 kB' 'MemAvailable: 49622460 kB' 'Buffers: 2704 kB' 'Cached: 10060076 kB' 'SwapCached: 0 kB' 'Active: 7031584 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644592 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464396 kB' 'Mapped: 177896 kB' 'Shmem: 6183408 kB' 'KReclaimable: 170760 kB' 'Slab: 517680 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346920 kB' 'KernelStack: 12704 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.373 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the compare / continue cycle repeats for every key from MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
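Both lookups so far return 0: AnonHugePages is 0 kB (no anonymous transparent hugepages in use) and HugePages_Surp is 0 (no surplus pages beyond the configured pool). For a quick spot check outside the harness, the same counters can be read with a one-line awk over /proc/meminfo (equivalent in effect, not the harness code):

    awk '$1 ~ /^(AnonHugePages|HugePages_Rsvd|HugePages_Surp):$/ {print $1, $2}' /proc/meminfo

The next lookup in the trace fetches HugePages_Rsvd the same way.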
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46151624 kB' 'MemAvailable: 49622536 kB' 'Buffers: 2704 kB' 'Cached: 10060092 kB' 'SwapCached: 0 kB' 'Active: 7031644 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644652 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464408 kB' 'Mapped: 177820 kB' 'Shmem: 6183424 kB' 'KReclaimable: 170760 kB' 'Slab: 517680 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346920 kB' 'KernelStack: 12720 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
00:03:55.375 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the compare / continue cycle repeats key by key against HugePages_Rsvd, as in the two scans above; the excerpt ends partway through this scan]
setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.376 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.376 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.377 nr_hugepages=1024 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.377 
resv_hugepages=0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.377 surplus_hugepages=0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.377 anon_hugepages=0 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46151624 kB' 'MemAvailable: 49622536 kB' 'Buffers: 2704 kB' 'Cached: 10060116 kB' 'SwapCached: 0 kB' 'Active: 7031668 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644676 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464408 kB' 'Mapped: 177820 kB' 'Shmem: 6183448 kB' 'KReclaimable: 170760 kB' 'Slab: 517684 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346924 kB' 'KernelStack: 12720 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 
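The xtrace above and below is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one key at a time: it keeps /proc/meminfo as the source when no NUMA node is given, strips any "Node N " prefix, and walks the key/value pairs until it reaches the requested key. A minimal stand-alone sketch reconstructed from this trace follows — the names come from the trace itself, but the exact branch layout of the real SPDK script may differ:

  #!/usr/bin/env bash
  shopt -s extglob   # the "+([0-9])" pattern below needs extglob

  # Sketch of get_meminfo as it appears in the trace:
  #   get_meminfo <Key> [node]  -> echoes the value of <Key>
  get_meminfo() {
      local get=$1 node=${2:-} var val
      local mem_f=/proc/meminfo mem

      # Prefer the per-node snapshot when a node index was passed in.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; drop it so the
      # keys match the system-wide /proc/meminfo format.
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue    # every other key is skipped, as in the trace
          echo "$val" && return 0             # any "kB" suffix lands in "_"
      done < <(printf '%s\n' "${mem[@]}")
  }

  get_meminfo HugePages_Rsvd      # -> 0 on this runner
  get_meminfo HugePages_Total     # -> 1024

The two example calls mirror the resv=0 and nr_hugepages=1024 values echoed by setup/hugepages.sh in this block.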
14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.377 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.378 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.379 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22017856 kB' 'MemUsed: 10859084 kB' 'SwapCached: 0 kB' 'Active: 5429588 kB' 'Inactive: 3263500 kB' 'Active(anon): 5244624 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418320 kB' 'Mapped: 96988 kB' 'AnonPages: 277900 kB' 'Shmem: 4969856 kB' 'KernelStack: 7848 kB' 'PageTables: 5188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105604 kB' 'Slab: 289308 kB' 'SReclaimable: 105604 kB' 'SUnreclaim: 183704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- 
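From here the same helper is re-invoked once per NUMA node (node=0 above, node=1 at the end of this block): because a node index is passed, the snapshot comes from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and node 0 reports HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0. Outside the harness the same per-node counters can be inspected directly, for example:

  grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo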
setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.379 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 24133768 kB' 'MemUsed: 3530984 kB' 'SwapCached: 0 kB' 'Active: 1601944 kB' 'Inactive: 228880 kB' 'Active(anon): 1399916 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1644524 kB' 'Mapped: 80832 kB' 'AnonPages: 186300 kB' 'Shmem: 1213616 kB' 'KernelStack: 4872 kB' 'PageTables: 2636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65156 kB' 'Slab: 228376 kB' 'SReclaimable: 65156 kB' 'SUnreclaim: 163220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.380 
14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.380 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.381 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.641 node0=512 expecting 512 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:55.641 node1=512 expecting 512 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.641 00:03:55.641 real 0m1.404s 00:03:55.641 user 0m0.556s 00:03:55.641 sys 0m0.808s 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.641 14:36:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.641 ************************************ 00:03:55.641 END TEST per_node_1G_alloc 00:03:55.641 ************************************ 00:03:55.641 14:36:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.641 14:36:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:55.641 14:36:34 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.641 14:36:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.641 14:36:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.641 ************************************ 00:03:55.641 START TEST even_2G_alloc 00:03:55.641 ************************************ 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.641 14:36:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.575 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:56.575 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:56.575 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:56.575 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:56.575 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:56.575 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:56.575 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:56.575 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:56.575 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:56.575 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:56.575 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:56.575 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:56.575 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:56.575 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:56.575 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:56.575 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:56.575 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46127624 kB' 'MemAvailable: 49598536 kB' 'Buffers: 2704 kB' 'Cached: 10060212 kB' 'SwapCached: 0 kB' 'Active: 7031640 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644648 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464248 kB' 'Mapped: 177896 kB' 'Shmem: 6183544 kB' 'KReclaimable: 170760 kB' 'Slab: 517564 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346804 kB' 'KernelStack: 12688 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.835 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 
14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.836 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46132428 kB' 'MemAvailable: 49603340 kB' 'Buffers: 2704 kB' 'Cached: 10060212 kB' 'SwapCached: 0 kB' 'Active: 7032076 kB' 'Inactive: 3492380 kB' 'Active(anon): 6645084 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464716 kB' 'Mapped: 177896 kB' 'Shmem: 6183544 kB' 'KReclaimable: 170760 kB' 'Slab: 517564 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346804 kB' 'KernelStack: 12688 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:56.837 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.838 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46132764 kB' 'MemAvailable: 49603676 kB' 'Buffers: 2704 kB' 'Cached: 10060232 kB' 'SwapCached: 0 kB' 'Active: 7031960 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644968 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464616 kB' 'Mapped: 177832 kB' 'Shmem: 6183564 kB' 'KReclaimable: 170760 kB' 'Slab: 517564 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346804 kB' 'KernelStack: 12720 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.839 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.840 nr_hugepages=1024 00:03:56.840 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.840 resv_hugepages=0 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.841 surplus_hugepages=0 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.841 anon_hugepages=0 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
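The trace up to this point is the hugepages accounting pass of even_2G_alloc: setup/common.sh walks /proc/meminfo one "key: value" pair at a time (IFS=': ' with read -r var val _), skipping every key until it reaches the one requested, echoes its value, and hugepages.sh records the results (surp=0, resv=0, nr_hugepages=1024) before starting the HugePages_Total lookup that continues below. A minimal standalone sketch of that lookup follows; the names (get_meminfo_sketch) and structure are illustrative assumptions, not the actual SPDK setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the meminfo key lookup exercised in the trace above (illustrative
# only -- not the SPDK setup/common.sh code).
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # key to fetch, optional NUMA node id
    local mem_f=/proc/meminfo var val _
    # When a node id is given and a per-node meminfo exists, prefer it. With
    # node unset the constructed path ends in ".../node/meminfo", the -e test
    # fails, and the system-wide /proc/meminfo is used -- the same fallback
    # visible in the trace.
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    # Per-node files prefix each key with "Node <id> "; strip that so the same
    # "key: value" parse works for both sources.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip every other meminfo key
        echo "$val"                         # value only (the "kB" unit lands in $_)
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Roughly the accounting the hugepages test performs with these values:
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
nr_hugepages=1024
(( total == nr_hugepages + surp + resv )) && echo "hugepages accounted for"

Pointed at a per-node file such as /sys/devices/system/node/node0/meminfo, the same loop returns the per-node HugePages_Total of 512 seen later in this log, i.e. the even split across the two NUMA nodes that gives the even_2G_alloc test its name.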
00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46132080 kB' 'MemAvailable: 49602992 kB' 'Buffers: 2704 kB' 'Cached: 10060236 kB' 'SwapCached: 0 kB' 'Active: 7031676 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644684 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464328 kB' 'Mapped: 177832 kB' 'Shmem: 6183568 kB' 'KReclaimable: 170760 kB' 'Slab: 517652 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346892 kB' 'KernelStack: 12720 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7751648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 
14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.841 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 
14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22002776 kB' 'MemUsed: 10874164 kB' 'SwapCached: 0 kB' 'Active: 5429804 kB' 'Inactive: 3263500 kB' 'Active(anon): 5244840 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418452 kB' 'Mapped: 97000 kB' 'AnonPages: 278024 kB' 'Shmem: 4969988 kB' 'KernelStack: 7848 kB' 'PageTables: 
5144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105604 kB' 'Slab: 289224 kB' 'SReclaimable: 105604 kB' 'SUnreclaim: 183620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.842 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.843 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 24129304 kB' 'MemUsed: 3535448 kB' 'SwapCached: 0 kB' 'Active: 1602228 kB' 'Inactive: 228880 kB' 'Active(anon): 1400200 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1644528 kB' 'Mapped: 80832 kB' 'AnonPages: 186580 kB' 'Shmem: 1213620 kB' 'KernelStack: 4872 kB' 'PageTables: 
2732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65156 kB' 'Slab: 228428 kB' 'SReclaimable: 65156 kB' 'SUnreclaim: 163272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.844 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.845 node0=512 expecting 512 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.845 node1=512 expecting 512 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.845 00:03:56.845 real 0m1.347s 00:03:56.845 user 0m0.580s 00:03:56.845 sys 0m0.722s 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.845 14:36:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.845 ************************************ 00:03:56.845 END TEST even_2G_alloc 00:03:56.845 ************************************ 00:03:56.845 14:36:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.845 14:36:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.845 14:36:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.845 14:36:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.845 14:36:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.845 
************************************ 00:03:56.845 START TEST odd_alloc 00:03:56.845 ************************************ 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.845 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.846 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.846 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.846 14:36:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:56.846 14:36:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.846 14:36:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.226 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.226 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.226 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.226 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.226 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.226 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.226 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:03:58.226 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.226 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.226 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.226 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.226 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.226 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.226 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.226 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.226 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.226 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46128468 kB' 'MemAvailable: 49599380 kB' 'Buffers: 2704 kB' 'Cached: 10060348 kB' 'SwapCached: 0 kB' 'Active: 7028352 kB' 'Inactive: 3492380 kB' 'Active(anon): 6641360 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460848 kB' 'Mapped: 176956 kB' 'Shmem: 6183680 kB' 'KReclaimable: 170760 kB' 'Slab: 517468 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346708 kB' 'KernelStack: 12672 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 7736540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.226 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.227 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.227 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.228 
14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46132328 kB' 'MemAvailable: 49603240 kB' 'Buffers: 2704 kB' 'Cached: 10060352 kB' 'SwapCached: 0 kB' 'Active: 7028512 kB' 'Inactive: 3492380 kB' 'Active(anon): 6641520 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460992 kB' 'Mapped: 176968 kB' 'Shmem: 6183684 kB' 'KReclaimable: 170760 kB' 'Slab: 517540 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346780 kB' 'KernelStack: 12672 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7736560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.228 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
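The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries in this trace is the xtrace of setup/common.sh's get_meminfo helper walking every /proc/meminfo line until it reaches the requested key. A minimal sketch of that lookup pattern follows; the name get_meminfo_sketch and the simplified argument handling are assumptions for illustration, not the actual helper (the real script additionally strips the leading "Node <n> " prefix when it reads a per-node meminfo file, as the mem=("${mem[@]#Node +([0-9]) }") entry above shows).

  get_meminfo_sketch() {
      # Look up a single key (e.g. HugePages_Surp) in /proc/meminfo, or in the
      # per-node file when a NUMA node is given. Prints the value, or 0.
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          # Skip every line whose key is not the one requested -- this is the
          # long run of "continue" entries visible in the trace.
          [[ $var == "$get" ]] && echo "${val:-0}" && return 0
      done < "$mem_f"
      echo 0
  }

Against the meminfo snapshot printed in this trace, get_meminfo_sketch HugePages_Surp would print 0, which is the value hugepages.sh stores as surp a few entries further on.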
00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.229 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46131824 kB' 'MemAvailable: 49602736 kB' 'Buffers: 2704 kB' 'Cached: 10060352 kB' 'SwapCached: 0 kB' 'Active: 7028948 kB' 'Inactive: 3492380 kB' 'Active(anon): 6641956 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461428 kB' 'Mapped: 176968 kB' 'Shmem: 6183684 kB' 'KReclaimable: 170760 kB' 'Slab: 517540 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346780 kB' 'KernelStack: 12672 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7736580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.230 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.231 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:58.232 nr_hugepages=1025 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.232 resv_hugepages=0 00:03:58.232 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.232 surplus_hugepages=0 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.233 anon_hugepages=0 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46132296 kB' 'MemAvailable: 49603208 kB' 'Buffers: 2704 kB' 'Cached: 10060388 kB' 'SwapCached: 0 kB' 'Active: 7028572 kB' 'Inactive: 3492380 kB' 'Active(anon): 6641580 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460980 kB' 'Mapped: 176892 kB' 'Shmem: 6183720 kB' 'KReclaimable: 170760 kB' 'Slab: 517480 kB' 'SReclaimable: 170760 kB' 'SUnreclaim: 346720 kB' 'KernelStack: 12640 kB' 'PageTables: 7380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7736600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 
14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.233 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
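The nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 values echoed a few entries above feed the odd_alloc consistency check at setup/hugepages.sh@107, which the trace shows as (( 1025 == nr_hugepages + surp + resv )) before HugePages_Total is read back. A hedged sketch of that accounting check follows; verify_odd_alloc is a hypothetical wrapper name and it reuses the illustrative get_meminfo_sketch above rather than the real helpers.

  verify_odd_alloc() {
      # Values taken from the echoes in this trace; in the real test they come
      # from the preceding get_meminfo calls.
      local nr_hugepages=1025 surp=0 resv=0
      local total
      total=$(get_meminfo_sketch HugePages_Total)   # 1025 in the snapshot above
      # The kernel must expose exactly the odd page count the test requested.
      if (( total != nr_hugepages + surp + resv )); then
          echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
          return 1
      fi
  }

With the snapshot shown here (HugePages_Total: 1025 at Hugepagesize: 2048 kB, hence Hugetlb: 2099200 kB), the check passes.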
00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.234 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22000764 kB' 'MemUsed: 10876176 kB' 'SwapCached: 0 kB' 'Active: 5428860 kB' 'Inactive: 3263500 kB' 'Active(anon): 5243896 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418580 kB' 'Mapped: 96284 kB' 'AnonPages: 276896 kB' 'Shmem: 4970116 kB' 'KernelStack: 7864 kB' 'PageTables: 5136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105604 kB' 'Slab: 289148 kB' 'SReclaimable: 105604 kB' 'SUnreclaim: 183544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.235 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.236 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 24131968 kB' 'MemUsed: 3532784 kB' 'SwapCached: 0 kB' 'Active: 1600128 kB' 'Inactive: 228880 kB' 'Active(anon): 1398100 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1644532 kB' 'Mapped: 80640 kB' 'AnonPages: 184496 kB' 'Shmem: 1213624 kB' 'KernelStack: 4840 kB' 'PageTables: 2424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65156 kB' 'Slab: 228332 kB' 'SReclaimable: 65156 kB' 'SUnreclaim: 163176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.237 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
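The node-1 scan here mirrors the node-0 pass above: as the echo just below shows, its HugePages_Surp is also 0, so the expected per-node split of the 1025 requested pages is left unchanged. The check this trace builds toward compares the two per-node counts as sorted sets, so it does not matter which node ended up with the odd extra page. A hedged sketch of that comparison, with array names as in setup/hugepages.sh and values taken from this log:

nodes_sys=(512 513)    # per-node HugePages_Total the kernel reports (node0, node1)
nodes_test=(513 512)   # split the test expected for 1025 pages, plus 0 surplus
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # index by the count, so the indices come back sorted
    sorted_s[nodes_sys[node]]=1
done
# Both index sets expand to "512 513", so the comparison passes either way.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc split OK: ${!sorted_t[*]}"

This is why the summary lines further down read node0=512 expecting 513 and node1=513 expecting 512 and the test still succeeds.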
00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:58.238 node0=512 expecting 513 00:03:58.238 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:58.239 node1=513 expecting 512 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:58.239 00:03:58.239 real 0m1.390s 00:03:58.239 user 0m0.588s 00:03:58.239 sys 0m0.765s 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.239 14:36:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.239 ************************************ 00:03:58.239 END TEST odd_alloc 00:03:58.239 ************************************ 00:03:58.239 14:36:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.239 14:36:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:58.239 14:36:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.239 14:36:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.239 14:36:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.497 ************************************ 00:03:58.497 START TEST custom_alloc 00:03:58.497 ************************************ 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.497 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.498 14:36:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
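At this point custom_alloc has requested 512 hugepages (1 GiB worth of the 2048 kB pages this host uses) for one node and 1024 pages (2 GiB) for the other, 1536 in total. A sketch of how those per-node counts become the HUGENODE string handed to scripts/setup.sh just below; the join step is illustrative, while the values come straight from this trace:

default_hugepages=2048                           # kB, the Hugepagesize reported later in this log
nodes_hp[0]=$((1048576 / default_hugepages))     # 512 pages reserved for node 0
nodes_hp[1]=$((2097152 / default_hugepages))     # 1024 pages reserved for node 1

HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=$(IFS=,; echo "${HUGENODE[*]}") total=${_nr_hugepages}"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 total=1536, matching nr_hugepages=1536 below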
00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.498 14:36:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.432 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.432 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.432 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.432 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.432 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.432 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.432 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.432 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.432 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.432 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.432 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.432 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:59.432 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.432 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.432 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.432 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.432 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45102616 kB' 'MemAvailable: 48573520 kB' 'Buffers: 2704 kB' 'Cached: 10060472 kB' 'SwapCached: 0 kB' 'Active: 7031168 kB' 'Inactive: 3492380 kB' 'Active(anon): 6644176 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463632 kB' 'Mapped: 177448 kB' 'Shmem: 6183804 kB' 'KReclaimable: 170744 kB' 'Slab: 517196 kB' 'SReclaimable: 170744 kB' 'SUnreclaim: 346452 kB' 'KernelStack: 12752 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7739868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
[setup/common.sh@31-@32: get_meminfo compares every field of the snapshot above (MemTotal ... HardwareCorrupted) against AnonHugePages; none matches, so the loop continues field by field]
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
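The trace above is setup/common.sh's get_meminfo helper resolving AnonHugePages to 0 for the custom_alloc test: node is empty here, so it reads /proc/meminfo rather than the per-node /sys file, splits each line on IFS=': ', and echoes the value of the first field that matches. A rough, self-contained sketch of that pattern (illustrative only, not the actual setup/common.sh code) looks like this:

  #!/usr/bin/env bash
  # Sketch of a get_meminfo-style lookup: print the value of one meminfo field.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # Per-NUMA-node meminfo lives in /sys and prefixes each line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # strip the per-node prefix if present
      return 1
  }
  get_meminfo AnonHugePages      # prints 0 on this node, matching anon=0 above
  get_meminfo HugePages_Total    # prints 1536, the pool size under test

Judging from the trace, the real helper snapshots the file with mapfile and strips the optional "Node N " prefix with an extglob expansion before re-emitting the lines via printf, but the field lookup is the same colon/space split shown in the sketch.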
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-@31: get_meminfo prologue - local get=HugePages_Surp, node empty, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _]
00:03:59.699 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45100696 kB' 'MemAvailable: 48571600 kB' 'Buffers: 2704 kB' 'Cached: 10060476 kB' 'SwapCached: 0 kB' 'Active: 7034632 kB' 'Inactive: 3492380 kB' 'Active(anon): 6647640 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467068 kB' 'Mapped: 177348 kB' 'Shmem: 6183808 kB' 'KReclaimable: 170744 kB' 'Slab: 517236 kB' 'SReclaimable: 170744 kB' 'SUnreclaim: 346492 kB' 'KernelStack: 12768 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7742804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
[setup/common.sh@31-@32: per-field scan of the snapshot above (MemTotal ... HugePages_Rsvd) against HugePages_Surp; no match, loop continues field by field]
00:03:59.700 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.700 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.700 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.700 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
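anon and surp have now both resolved to 0; HugePages_Rsvd is read the same way next, and setup/hugepages.sh then checks (at @107-@109 further down) that the 1536 pages requested by the custom_alloc test are fully accounted for. A minimal standalone sketch of that bookkeeping with the values observed in this run (variable names here are illustrative, not necessarily the script's own):

  # Hugepage accounting check in the spirit of setup/hugepages.sh@107-@109
  # (sketch only; values taken from the meminfo snapshots in this log).
  requested=1536       # pages the custom_alloc test asked for
  nr_hugepages=1536    # HugePages_Total
  surp=0               # HugePages_Surp
  resv=0               # HugePages_Rsvd
  if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
      echo "hugepage pool consistent: ${nr_hugepages} x 2048 kB = $((nr_hugepages * 2048)) kB"
  fi

The product, 3145728 kB, matches the Hugetlb line reported in each /proc/meminfo snapshot.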
00:03:59.700 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-@31: get_meminfo prologue - local get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _]
00:03:59.701 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45100696 kB' 'MemAvailable: 48571600 kB' 'Buffers: 2704 kB' 'Cached: 10060492 kB' 'SwapCached: 0 kB' 'Active: 7034868 kB' 'Inactive: 3492380 kB' 'Active(anon): 6647876 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467336 kB' 'Mapped: 177348 kB' 'Shmem: 6183824 kB' 'KReclaimable: 170744 kB' 'Slab: 517236 kB' 'SReclaimable: 170744 kB' 'SUnreclaim: 346492 kB' 'KernelStack: 12768 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7742824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB'
[setup/common.sh@31-@32: per-field scan of the snapshot above (MemTotal ... HugePages_Free) against HugePages_Rsvd; no match, loop continues field by field]
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:59.702 nr_hugepages=1536
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.702 resv_hugepages=0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.702 surplus_hugepages=0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.702 anon_hugepages=0
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:59.702 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-@31: get_meminfo prologue - local get=HugePages_Total, node empty, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _]
00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45100696 kB' 'MemAvailable: 48571600 kB' 'Buffers: 2704 kB' 'Cached: 10060512 kB' 'SwapCached: 0 kB' 'Active: 7028752 kB' 'Inactive: 3492380 kB' 'Active(anon): 6641760 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461208 kB' 'Mapped: 176912 kB' 'Shmem: 6183844 kB' 'KReclaimable: 170744 kB' 'Slab: 517236 kB' 'SReclaimable: 170744 kB' 'SUnreclaim: 346492 kB' 'KernelStack: 12752 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7736724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB'
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.703 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.704 
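The get_nodes pass above records the per-node targets for this custom_alloc run: 512 pages on node 0 and 1024 on node 1, 1536 in total. A minimal sketch of how such a split can be requested through sysfs follows; the node numbers and the 2048 kB page size come from this run, while the variable names and error handling are illustrative and not taken from the project's setup scripts.

#!/usr/bin/env bash
# Illustrative sketch only: request 512 x 2 MiB pages on node 0 and 1024 on
# node 1, mirroring the 1536-page split recorded by get_nodes above.
set -euo pipefail

declare -A want=( [0]=512 [1]=1024 )          # per-node targets from this run
hp=hugepages/hugepages-2048kB                 # 2048 kB page size, as reported

for node in "${!want[@]}"; do
    path=/sys/devices/system/node/node${node}/${hp}/nr_hugepages
    echo "${want[$node]}" | sudo tee "$path" >/dev/null   # kernel may grant fewer
    echo "node${node}: requested ${want[$node]}, got $(cat "$path")"
done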
14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22020236 kB' 'MemUsed: 10856704 kB' 'SwapCached: 0 kB' 'Active: 5428412 kB' 'Inactive: 3263500 kB' 'Active(anon): 5243448 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418680 kB' 'Mapped: 96300 kB' 'AnonPages: 276452 kB' 'Shmem: 4970216 kB' 'KernelStack: 7880 kB' 'PageTables: 5172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105588 kB' 'Slab: 289184 kB' 'SReclaimable: 105588 kB' 'SUnreclaim: 183596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.704 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.704 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.705 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- 
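The node-0 read above (get_meminfo HugePages_Surp 0) walks /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix, scans key/value pairs with IFS=': ' until it reaches the requested field, and echoes the value, 0 in this case. A condensed equivalent is sketched below; the function name and the awk shortcut are paraphrases of the pattern in the trace, not the code in setup/common.sh.

# Simplified sketch of the lookup pattern visible in the trace: fetch one
# field from /proc/meminfo, or from a node's meminfo file when a node id
# is given. Not the project's implementation.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    # Per-node lines read "Node 0 HugePages_Surp: 0"; /proc/meminfo lines
    # read "HugePages_Surp: 0". Strip the prefix, then match the key.
    awk -v k="$key" '{ sub(/^Node [0-9]+ /, ""); if ($1 == k":") print $2 }' "$file"
}

get_meminfo_sketch HugePages_Surp 0    # prints 0 for the node-0 state above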
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23080460 kB' 'MemUsed: 4584292 kB' 'SwapCached: 0 kB' 'Active: 1600372 kB' 'Inactive: 228880 kB' 'Active(anon): 1398344 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1644560 kB' 'Mapped: 80612 kB' 'AnonPages: 184788 kB' 'Shmem: 1213652 kB' 'KernelStack: 4872 kB' 'PageTables: 2468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65156 kB' 'Slab: 228104 kB' 'SReclaimable: 65156 kB' 'SUnreclaim: 162948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.706 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.706 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.707 14:36:38 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.707 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.707 node0=512 expecting 512 00:03:59.965 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.965 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.965 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.965 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:59.965 node1=1024 expecting 1024 00:03:59.965 14:36:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:59.965 00:03:59.965 real 0m1.454s 00:03:59.965 user 0m0.631s 00:03:59.965 sys 0m0.756s 00:03:59.965 14:36:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.965 14:36:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.965 ************************************ 00:03:59.965 END TEST custom_alloc 00:03:59.965 ************************************ 00:03:59.965 14:36:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.965 14:36:39 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:59.965 14:36:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.965 14:36:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.965 14:36:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.965 ************************************ 00:03:59.965 START TEST no_shrink_alloc 00:03:59.965 ************************************ 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.965 14:36:39 
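For orientation: the get_test_nr_hugepages 2097152 0 call that starts in the trace here ends with nr_hugepages=1024 pinned to node 0, and the /proc/meminfo snapshot further down reports HugePages_Total: 1024 with Hugepagesize: 2048 kB, i.e. 1024 x 2048 kB = 2097152 kB, matching both the requested size and the Hugetlb figure. The snippet below just reproduces that arithmetic from a live /proc/meminfo; it is a reader-side sanity check, not part of the SPDK scripts.

  #!/usr/bin/env bash
  # Recompute the hugepage pool size the way the numbers above line up:
  # HugePages_Total pages x Hugepagesize kB should equal the Hugetlb figure.
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  echo "hugepage pool: $total pages x ${page_kb} kB = $(( total * page_kb )) kB"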
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.965 14:36:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.899 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.899 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.899 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.899 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.899 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.899 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.899 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.899 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.899 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.899 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.899 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.899 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.899 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.899 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.899 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.899 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.899 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.163 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:01.163 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.163 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.163 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46147340 kB' 'MemAvailable: 49618300 kB' 'Buffers: 2704 kB' 'Cached: 10060604 kB' 'SwapCached: 0 kB' 'Active: 7029596 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642604 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462140 kB' 'Mapped: 176944 kB' 'Shmem: 6183936 kB' 'KReclaimable: 170856 kB' 'Slab: 517340 kB' 'SReclaimable: 170856 kB' 'SUnreclaim: 346484 kB' 'KernelStack: 12656 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 
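The long run of "[[ <field> == AnonHugePages ]] ... continue" lines that follows is the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key, then echoing that key's value. The reconstruction below is pieced together from the @17-@33 trace references above, so treat it as an approximation of the helper rather than the SPDK source itself.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node N " prefix strip below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f mem

      mem_f=/proc/meminfo
      # With a node argument the per-node copy in sysfs is read instead; its
      # lines carry a "Node N " prefix, which the expansion below strips off.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")

      # One trace block per field: test the name, "continue" on a miss, echo
      # the value (IFS=': ' already split off the trailing "kB") on a hit.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

  get_meminfo AnonHugePages     # system-wide; the snapshot above shows 0
  get_meminfo HugePages_Total 0 # node 0 only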
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 
14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.164 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 
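A few lines up (setup/hugepages.sh@96) the script tested "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" before asking for AnonHugePages. That string is the content of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active THP mode. A standalone sketch of the same check follows; the stated reason for it (keeping THP usage visible alongside the static pool) is an inference, not something the log says.

  #!/usr/bin/env bash
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is not disabled, so anonymous hugepage usage is worth sampling.
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "AnonHugePages: ${anon_kb} kB (THP mode: $thp)"
  fi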
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46159560 kB' 'MemAvailable: 49630520 kB' 'Buffers: 2704 kB' 'Cached: 10060604 kB' 'SwapCached: 0 kB' 'Active: 7029560 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642568 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462044 kB' 'Mapped: 176924 kB' 'Shmem: 6183936 kB' 'KReclaimable: 170856 kB' 'Slab: 517268 kB' 'SReclaimable: 170856 kB' 'SUnreclaim: 346412 kB' 'KernelStack: 12704 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 7737104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.165 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.166 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46160156 kB' 'MemAvailable: 49631116 kB' 'Buffers: 2704 kB' 'Cached: 10060624 kB' 'SwapCached: 0 kB' 'Active: 7029536 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642544 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461976 kB' 'Mapped: 176924 kB' 'Shmem: 6183956 kB' 'KReclaimable: 170856 kB' 'Slab: 517348 kB' 'SReclaimable: 170856 kB' 'SUnreclaim: 346492 kB' 'KernelStack: 12704 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 
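This is the third meminfo pass of verify_nr_hugepages: after anon (AnonHugePages) and surp (HugePages_Surp, 0 above), the helper is now scanning for HugePages_Rsvd. How the script combines the three is outside this excerpt, so the snippet below is only one plausible reading: surplus pages sit on top of the configured pool and reserved pages are promised to mappings but not yet faulted in, so both are normally set aside when checking the pool against the requested count.

  #!/usr/bin/env bash
  anon=$(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # kB of THP in use
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # pages above the pool
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # pages reserved
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  echo "pool: $(( total - surp )) configured pages, $resv reserved, ${anon} kB THP"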
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.167 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.168 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.169 nr_hugepages=1024 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.169 resv_hugepages=0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.169 surplus_hugepages=0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.169 anon_hugepages=0 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46161740 kB' 'MemAvailable: 49632700 kB' 'Buffers: 2704 kB' 'Cached: 10060644 kB' 'SwapCached: 0 kB' 'Active: 7029544 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642552 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461944 kB' 'Mapped: 176924 kB' 'Shmem: 6183976 kB' 'KReclaimable: 170856 kB' 'Slab: 517348 kB' 'SReclaimable: 170856 kB' 'SUnreclaim: 346492 kB' 'KernelStack: 12688 kB' 'PageTables: 7384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.169 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.170 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.171 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20987432 kB' 'MemUsed: 11889508 kB' 'SwapCached: 0 kB' 'Active: 5429192 kB' 'Inactive: 3263500 kB' 'Active(anon): 5244228 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418812 kB' 'Mapped: 96312 kB' 'AnonPages: 277176 kB' 'Shmem: 4970348 kB' 'KernelStack: 7864 kB' 'PageTables: 5140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105572 kB' 'Slab: 289120 kB' 'SReclaimable: 105572 kB' 'SUnreclaim: 183548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.171 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.172 node0=1024 expecting 1024 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:01.172 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.173 14:36:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.554 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.554 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.554 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.554 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:02.554 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.554 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:02.554 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:02.554 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:02.554 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:02.554 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.554 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.554 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:02.554 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.554 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:02.554 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:02.554 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:02.554 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
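The long runs of [[ <field> == HugePages_* ]] / continue pairs traced above are setup/common.sh's get_meminfo scanning a /proc/meminfo (or per-node) snapshot field by field until the requested counter matches, then echoing its value. A minimal standalone sketch of that parsing pattern in plain bash follows; the function is illustrative only, not the SPDK helper itself, and the per-node path handling is an assumption based on the paths seen in the trace:

get_meminfo() {
    # get_meminfo <field> [node] -> prints the numeric value of <field>
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val
    # Per-node counters live under sysfs when a node index is given (assumed path).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node $node }        # per-node files prefix each line with "Node <n> "
        var=${line%%:*}                 # field name before the colon
        val=${line#*:}                  # remainder, e.g. "    1024" or " 2097152 kB"
        if [[ $var == "$key" ]]; then
            val=${val#"${val%%[0-9]*}"} # drop leading whitespace
            echo "${val%% *}"           # print only the number, units dropped
            return 0
        fi
    done < "$file"
    return 1
}

Called as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, a helper like this yields the same 1024 and 0 values the trace above echoes before the node0=1024 expecting 1024 check.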
00:04:02.554 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46154112 kB' 'MemAvailable: 49625068 kB' 'Buffers: 2704 kB' 'Cached: 10060712 kB' 'SwapCached: 0 kB' 'Active: 7030152 kB' 'Inactive: 3492380 kB' 'Active(anon): 6643160 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462104 kB' 'Mapped: 176988 kB' 'Shmem: 6184044 kB' 'KReclaimable: 170848 kB' 'Slab: 517520 kB' 'SReclaimable: 170848 kB' 'SUnreclaim: 346672 kB' 'KernelStack: 12736 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:02.554 
14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.554 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.555 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46154848 kB' 'MemAvailable: 49625804 kB' 'Buffers: 2704 kB' 'Cached: 10060716 kB' 'SwapCached: 0 kB' 'Active: 7030112 kB' 'Inactive: 3492380 kB' 'Active(anon): 6643120 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462408 kB' 'Mapped: 176924 kB' 'Shmem: 6184048 kB' 'KReclaimable: 170848 kB' 'Slab: 517536 kB' 'SReclaimable: 170848 kB' 'SUnreclaim: 346688 kB' 'KernelStack: 12752 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.556 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.557 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.558 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46155300 kB' 'MemAvailable: 49626256 kB' 'Buffers: 2704 kB' 'Cached: 10060736 kB' 'SwapCached: 0 kB' 'Active: 7029828 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642836 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462124 kB' 'Mapped: 176936 kB' 'Shmem: 6184068 kB' 'KReclaimable: 170848 kB' 'Slab: 517580 kB' 'SReclaimable: 170848 kB' 'SUnreclaim: 346732 kB' 'KernelStack: 12768 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.558 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.559 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.560 nr_hugepages=1024 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.560 resv_hugepages=0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.560 surplus_hugepages=0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.560 anon_hugepages=0 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
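The xtrace above is the get_meminfo helper in setup/common.sh stepping through /proc/meminfo one field at a time until it reaches the requested key; HugePages_Rsvd comes back 0 here, which is why resv=0 is echoed below. A minimal sketch of that scan, assuming the helper name and the IFS/read pattern visible in the trace (the actual common.sh source is longer and is not reproduced here):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem var val _ line

        # Per-node lookups read the sysfs copy; with no node argument the
        # node path does not exist and the scan falls back to /proc/meminfo,
        # exactly as the [[ -e /sys/devices/system/node/node/meminfo ]]
        # check in the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on sysfs copies

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip fields until the key matches
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on this host, matching resv=0 in the trace

Called as get_meminfo HugePages_Total it would print 1024 here, matching the snapshot that follows.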
00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.560 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 46155684 kB' 'MemAvailable: 49626640 kB' 'Buffers: 2704 kB' 'Cached: 10060736 kB' 'SwapCached: 0 kB' 'Active: 7029524 kB' 'Inactive: 3492380 kB' 'Active(anon): 6642532 kB' 'Inactive(anon): 0 kB' 'Active(file): 386992 kB' 'Inactive(file): 3492380 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461820 kB' 'Mapped: 176936 kB' 'Shmem: 6184068 kB' 'KReclaimable: 170848 kB' 'Slab: 517580 kB' 'SReclaimable: 170848 kB' 'SUnreclaim: 346732 kB' 'KernelStack: 12752 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7737384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1574492 kB' 'DirectMap2M: 13025280 kB' 'DirectMap1G: 54525952 kB' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.561 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.562 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20978628 kB' 'MemUsed: 11898312 kB' 'SwapCached: 0 kB' 'Active: 5429284 kB' 'Inactive: 3263500 kB' 'Active(anon): 5244320 kB' 'Inactive(anon): 0 kB' 'Active(file): 184964 kB' 'Inactive(file): 3263500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8418920 kB' 'Mapped: 96324 kB' 'AnonPages: 277204 kB' 'Shmem: 4970456 kB' 'KernelStack: 7864 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105572 kB' 'Slab: 289300 kB' 'SReclaimable: 105572 kB' 'SUnreclaim: 183728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.563 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.564 node0=1024 expecting 1024 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.564 00:04:02.564 real 0m2.707s 00:04:02.564 user 0m1.166s 00:04:02.564 sys 0m1.460s 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.564 14:36:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.564 ************************************ 00:04:02.564 END TEST no_shrink_alloc 
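The system-wide snapshot above reports HugePages_Total: 1024 at Hugepagesize: 2048 kB, i.e. Hugetlb: 2097152 kB (1024 * 2048 kB = 2 GiB), and the node0 sysfs copy shows the same 1024 pages free with HugePages_Surp: 0, which is exactly what the node0=1024 expecting 1024 check asserts. A hypothetical standalone check in the same spirit (the loop and variable names below are illustrative, not taken from setup/hugepages.sh):

    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024",
    # so the count is the last field on the line.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        free=$(awk '/HugePages_Free:/ {print $NF}' "$node_dir/meminfo")
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: total=$total free=$free surplus=$surp"
    done
    # Per the trace, node0 holds all 1024 pages and node1 holds none.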
00:04:02.564 ************************************ 00:04:02.564 14:36:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.564 14:36:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.564 00:04:02.564 real 0m11.181s 00:04:02.564 user 0m4.374s 00:04:02.564 sys 0m5.650s 00:04:02.564 14:36:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.564 14:36:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.564 ************************************ 00:04:02.564 END TEST hugepages 00:04:02.564 ************************************ 00:04:02.564 14:36:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.564 14:36:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:02.564 14:36:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.564 14:36:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.564 14:36:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.564 ************************************ 00:04:02.564 START TEST driver 00:04:02.564 ************************************ 00:04:02.564 14:36:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:02.822 * Looking for test storage... 
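Before the driver tests start, the clear_hp pass traced above returns every node's hugepage pools to zero and exports CLEAR_HUGE=yes. Roughly, assuming the sysfs layout shown in the trace and root privileges as on this CI node (a sketch, not the literal setup/hugepages.sh source):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # the "echo 0" lines in the trace
            done
        done
        export CLEAR_HUGE=yes   # picked up by later setup stages
    }

The guess_driver test that begins here then picks a userspace PCI driver; as the trace below shows, it lands on vfio-pci because IOMMU groups are present (141 on this host) and modprobe --show-depends vfio_pci resolves to loadable .ko modules. A compressed illustration of that decision (pick_driver below is a simplified stand-in for the logic in setup/driver.sh, not its source):

    pick_driver() {
        local unsafe=N
        # Prefer vfio-pci when unsafe no-IOMMU mode is enabled or IOMMU
        # groups exist, and modprobe can resolve vfio_pci to real modules.
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if [[ $unsafe == Y ]] || compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'   # the string the trace checks against
    }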
00:04:02.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.822 14:36:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:02.822 14:36:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.822 14:36:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.357 14:36:44 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:05.358 14:36:44 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.358 14:36:44 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.358 14:36:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.358 ************************************ 00:04:05.358 START TEST guess_driver 00:04:05.358 ************************************ 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:05.358 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:05.358 Looking for driver=vfio-pci 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.358 14:36:44 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.292 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.551 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.552 14:36:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.488 14:36:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.016 00:04:10.016 real 0m4.749s 00:04:10.016 user 0m1.110s 00:04:10.016 sys 0m1.759s 00:04:10.016 14:36:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.016 14:36:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 ************************************ 00:04:10.016 END TEST guess_driver 00:04:10.016 ************************************ 00:04:10.016 14:36:49 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:10.016 00:04:10.016 real 0m7.316s 00:04:10.016 user 0m1.673s 00:04:10.016 sys 0m2.774s 00:04:10.016 14:36:49 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.016 14:36:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 ************************************ 00:04:10.016 END TEST driver 00:04:10.016 ************************************ 00:04:10.016 14:36:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.016 14:36:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.016 14:36:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.016 14:36:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.016 14:36:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 ************************************ 00:04:10.016 START TEST devices 00:04:10.016 ************************************ 00:04:10.016 14:36:49 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.016 * Looking for test storage... 00:04:10.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.016 14:36:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.016 14:36:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:10.016 14:36:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.016 14:36:49 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:11.391 
14:36:50 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:11.391 No valid GPT data, bailing 00:04:11.391 14:36:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:11.391 14:36:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:11.391 14:36:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:11.391 14:36:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.391 14:36:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:11.678 ************************************ 00:04:11.678 START TEST nvme_mount 00:04:11.678 ************************************ 00:04:11.678 14:36:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:11.678 14:36:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:11.678 14:36:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:11.678 14:36:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:11.679 14:36:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:12.615 Creating new GPT entries in memory. 00:04:12.615 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.615 other utilities. 00:04:12.615 14:36:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.615 14:36:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.615 14:36:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.615 14:36:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.615 14:36:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:13.550 Creating new GPT entries in memory. 00:04:13.550 The operation has completed successfully. 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1736630 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.550 14:36:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.550 14:36:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.928 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.929 14:36:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:14.929 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.929 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.186 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.186 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.186 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.186 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.186 14:36:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.556 14:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.928 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.929 14:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.929 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.929 00:04:17.929 real 0m6.325s 00:04:17.929 user 0m1.529s 00:04:17.929 sys 0m2.364s 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.929 14:36:57 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:17.929 ************************************ 00:04:17.929 END TEST nvme_mount 00:04:17.929 ************************************ 00:04:17.929 14:36:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:17.929 14:36:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:17.929 14:36:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.929 14:36:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.929 14:36:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:17.929 ************************************ 00:04:17.929 START TEST dm_mount 00:04:17.929 ************************************ 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.929 14:36:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:18.866 Creating new GPT entries in memory. 00:04:18.866 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.866 other utilities. 00:04:18.866 14:36:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.866 14:36:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.866 14:36:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:18.866 14:36:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.866 14:36:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:19.803 Creating new GPT entries in memory. 00:04:19.803 The operation has completed successfully. 00:04:19.803 14:36:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.803 14:36:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.803 14:36:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.803 14:36:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.803 14:36:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:21.181 The operation has completed successfully. 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1739015 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.181 14:37:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.116 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.375 14:37:01 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.375 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.376 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.376 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:22.376 14:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.376 14:37:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.376 14:37:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.310 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:23.568 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:23.568 00:04:23.568 real 0m5.657s 00:04:23.568 user 0m0.974s 00:04:23.568 sys 0m1.542s 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.568 14:37:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:23.568 ************************************ 00:04:23.568 END TEST dm_mount 00:04:23.568 ************************************ 00:04:23.568 14:37:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.568 14:37:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.826 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:23.826 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.826 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.826 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.826 14:37:03 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:23.826 00:04:23.826 real 0m13.849s 00:04:23.826 user 0m3.181s 00:04:23.826 sys 0m4.858s 00:04:23.826 14:37:03 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.827 14:37:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.827 ************************************ 00:04:23.827 END TEST devices 00:04:23.827 ************************************ 00:04:23.827 14:37:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:23.827 00:04:23.827 real 0m42.941s 00:04:23.827 user 0m12.534s 00:04:23.827 sys 0m18.586s 00:04:23.827 14:37:03 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.827 14:37:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.827 ************************************ 00:04:23.827 END TEST setup.sh 00:04:23.827 ************************************ 00:04:23.827 14:37:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.827 14:37:03 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.199 Hugepages 00:04:25.199 node hugesize free / total 00:04:25.199 node0 1048576kB 0 / 0 00:04:25.199 node0 2048kB 2048 / 2048 00:04:25.199 node1 1048576kB 0 / 0 00:04:25.199 node1 2048kB 0 / 0 00:04:25.199 00:04:25.199 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.199 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:25.199 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:25.199 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:25.199 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:25.199 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:25.199 14:37:04 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.199 14:37:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.199 14:37:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.199 14:37:04 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.134 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:26.134 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:26.134 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:26.134 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:26.134 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:26.393 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:26.393 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:26.393 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:26.393 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:26.393 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:27.330 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.330 14:37:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:28.268 14:37:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:28.268 14:37:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:28.268 14:37:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.269 14:37:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:28.269 14:37:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.269 14:37:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.269 14:37:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.269 14:37:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.269 14:37:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:28.269 14:37:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:28.269 14:37:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:28.269 14:37:07 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.646 Waiting for block devices as requested 00:04:29.646 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:29.646 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:29.646 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:29.905 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:29.905 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:29.905 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:29.905 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:30.163 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:30.163 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:30.163 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:30.163 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:30.419 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:30.419 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:30.419 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:30.419 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:30.677 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:30.677 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:30.677 14:37:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:30.677 14:37:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:30.677 14:37:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:30.677 14:37:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:30.677 14:37:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:30.677 14:37:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:30.677 14:37:09 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:30.677 14:37:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:30.677 14:37:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:30.677 14:37:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:30.677 14:37:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:30.677 14:37:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:30.677 14:37:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:30.677 14:37:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:30.677 14:37:09 -- common/autotest_common.sh@1557 -- # continue 00:04:30.677 14:37:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:30.677 14:37:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.677 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:04:30.962 14:37:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:30.962 14:37:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.962 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:04:30.962 14:37:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.894 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.153 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:04:32.153 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.153 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.089 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.347 14:37:12 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:33.347 14:37:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.347 14:37:12 -- common/autotest_common.sh@10 -- # set +x 00:04:33.347 14:37:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:33.347 14:37:12 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:33.347 14:37:12 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.347 14:37:12 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:33.347 14:37:12 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:33.347 14:37:12 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:33.347 14:37:12 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.347 14:37:12 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.347 14:37:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.347 14:37:12 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.347 14:37:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:33.347 14:37:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:33.347 14:37:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:33.347 14:37:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:33.347 14:37:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:33.347 14:37:12 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:33.347 14:37:12 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:33.347 14:37:12 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:33.347 14:37:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:33.347 14:37:12 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:33.347 14:37:12 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1744196 00:04:33.347 14:37:12 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.347 14:37:12 -- common/autotest_common.sh@1598 -- # waitforlisten 1744196 00:04:33.347 14:37:12 -- common/autotest_common.sh@829 -- # '[' -z 1744196 ']' 00:04:33.347 14:37:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.347 14:37:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.347 14:37:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.347 14:37:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.347 14:37:12 -- common/autotest_common.sh@10 -- # set +x 00:04:33.347 [2024-07-14 14:37:12.654403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
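The target whose startup banner appears above is brought up by opal_revert_cleanup via the helpers in autotest_common.sh; a minimal sketch of that launch-and-wait pattern, assuming the workspace layout shown in this run, is:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$rootdir/test/common/autotest_common.sh"   # provides waitforlisten (path assumed)
    "$rootdir/build/bin/spdk_tgt" &                    # same binary as launched in the log above
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                      # polls the default /var/tmp/spdk.sock until the target answers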
00:04:33.347 [2024-07-14 14:37:12.654537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744196 ] 00:04:33.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.605 [2024-07-14 14:37:12.780031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.863 [2024-07-14 14:37:13.035694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.792 14:37:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.793 14:37:13 -- common/autotest_common.sh@862 -- # return 0 00:04:34.793 14:37:13 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:34.793 14:37:13 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:34.793 14:37:13 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:38.075 nvme0n1 00:04:38.075 14:37:17 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:38.075 [2024-07-14 14:37:17.283632] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:38.075 [2024-07-14 14:37:17.283704] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:38.075 request: 00:04:38.075 { 00:04:38.075 "nvme_ctrlr_name": "nvme0", 00:04:38.075 "password": "test", 00:04:38.075 "method": "bdev_nvme_opal_revert", 00:04:38.075 "req_id": 1 00:04:38.075 } 00:04:38.075 Got JSON-RPC error response 00:04:38.075 response: 00:04:38.075 { 00:04:38.075 "code": -32603, 00:04:38.075 "message": "Internal error" 00:04:38.075 } 00:04:38.075 14:37:17 -- common/autotest_common.sh@1604 -- # true 00:04:38.075 14:37:17 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:38.075 14:37:17 -- common/autotest_common.sh@1608 -- # killprocess 1744196 00:04:38.075 14:37:17 -- common/autotest_common.sh@948 -- # '[' -z 1744196 ']' 00:04:38.075 14:37:17 -- common/autotest_common.sh@952 -- # kill -0 1744196 00:04:38.075 14:37:17 -- common/autotest_common.sh@953 -- # uname 00:04:38.075 14:37:17 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.075 14:37:17 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744196 00:04:38.075 14:37:17 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.075 14:37:17 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.075 14:37:17 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744196' 00:04:38.075 killing process with pid 1744196 00:04:38.075 14:37:17 -- common/autotest_common.sh@967 -- # kill 1744196 00:04:38.075 14:37:17 -- common/autotest_common.sh@972 -- # wait 1744196 00:04:42.261 14:37:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.261 14:37:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.261 14:37:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.261 14:37:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.261 14:37:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.261 14:37:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.261 14:37:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.261 14:37:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:42.261 14:37:21 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.261 14:37:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.261 14:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.261 14:37:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.261 ************************************ 00:04:42.261 START TEST env 00:04:42.261 ************************************ 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.261 * Looking for test storage... 00:04:42.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:42.261 14:37:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.261 14:37:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.261 ************************************ 00:04:42.261 START TEST env_memory 00:04:42.261 ************************************ 00:04:42.261 14:37:21 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.261 00:04:42.261 00:04:42.261 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.261 http://cunit.sourceforge.net/ 00:04:42.261 00:04:42.261 00:04:42.261 Suite: memory 00:04:42.261 Test: alloc and free memory map ...[2024-07-14 14:37:21.217742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.261 passed 00:04:42.261 Test: mem map translation ...[2024-07-14 14:37:21.260433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.261 [2024-07-14 14:37:21.260473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.261 [2024-07-14 14:37:21.260559] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.261 [2024-07-14 14:37:21.260589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.261 passed 00:04:42.261 Test: mem map registration ...[2024-07-14 14:37:21.333013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.261 [2024-07-14 14:37:21.333060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.261 passed 00:04:42.261 Test: mem map adjacent registrations ...passed 00:04:42.261 00:04:42.261 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.261 suites 1 1 n/a 0 0 00:04:42.261 tests 4 4 4 0 0 00:04:42.261 asserts 152 152 152 0 n/a 00:04:42.261 00:04:42.261 Elapsed time = 0.248 seconds 00:04:42.261 00:04:42.261 real 0m0.268s 00:04:42.261 user 0m0.252s 00:04:42.261 sys 0m0.015s 00:04:42.261 14:37:21 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.261 14:37:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.261 ************************************ 00:04:42.261 END TEST env_memory 00:04:42.261 ************************************ 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.261 14:37:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.261 14:37:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.261 14:37:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.261 ************************************ 00:04:42.261 START TEST env_vtophys 00:04:42.261 ************************************ 00:04:42.262 14:37:21 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.262 EAL: lib.eal log level changed from notice to debug 00:04:42.262 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.262 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.262 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.262 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.262 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.262 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.262 EAL: Detected lcore 6 as core 8 on socket 0 00:04:42.262 EAL: Detected lcore 7 as core 9 on socket 0 00:04:42.262 EAL: Detected lcore 8 as core 10 on socket 0 00:04:42.262 EAL: Detected lcore 9 as core 11 on socket 0 00:04:42.262 EAL: Detected lcore 10 as core 12 on socket 0 00:04:42.262 EAL: Detected lcore 11 as core 13 on socket 0 00:04:42.262 EAL: Detected lcore 12 as core 0 on socket 1 00:04:42.262 EAL: Detected lcore 13 as core 1 on socket 1 00:04:42.262 EAL: Detected lcore 14 as core 2 on socket 1 00:04:42.262 EAL: Detected lcore 15 as core 3 on socket 1 00:04:42.262 EAL: Detected lcore 16 as core 4 on socket 1 00:04:42.262 EAL: Detected lcore 17 as core 5 on socket 1 00:04:42.262 EAL: Detected lcore 18 as core 8 on socket 1 00:04:42.262 EAL: Detected lcore 19 as core 9 on socket 1 00:04:42.262 EAL: Detected lcore 20 as core 10 on socket 1 00:04:42.262 EAL: Detected lcore 21 as core 11 on socket 1 00:04:42.262 EAL: Detected lcore 22 as core 12 on socket 1 00:04:42.262 EAL: Detected lcore 23 as core 13 on socket 1 00:04:42.262 EAL: Detected lcore 24 as core 0 on socket 0 00:04:42.262 EAL: Detected lcore 25 as core 1 on socket 0 00:04:42.262 EAL: Detected lcore 26 as core 2 on socket 0 00:04:42.262 EAL: Detected lcore 27 as core 3 on socket 0 00:04:42.262 EAL: Detected lcore 28 as core 4 on socket 0 00:04:42.262 EAL: Detected lcore 29 as core 5 on socket 0 00:04:42.262 EAL: Detected lcore 30 as core 8 on socket 0 00:04:42.262 EAL: Detected lcore 31 as core 9 on socket 0 00:04:42.262 EAL: Detected lcore 32 as core 10 on socket 0 00:04:42.262 EAL: Detected lcore 33 as core 11 on socket 0 00:04:42.262 EAL: Detected lcore 34 as core 12 on socket 0 00:04:42.262 EAL: Detected lcore 35 as core 13 on socket 0 00:04:42.262 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.262 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.262 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.262 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.262 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.262 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.262 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:42.262 EAL: Detected lcore 43 as core 9 on socket 1 00:04:42.262 EAL: Detected lcore 44 as core 10 on socket 1 00:04:42.262 EAL: Detected lcore 45 as core 11 on socket 1 00:04:42.262 EAL: Detected lcore 46 as core 12 on socket 1 00:04:42.262 EAL: Detected lcore 47 as core 13 on socket 1 00:04:42.262 EAL: Maximum logical cores by configuration: 128 00:04:42.262 EAL: Detected CPU lcores: 48 00:04:42.262 EAL: Detected NUMA nodes: 2 00:04:42.262 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.262 EAL: Detected shared linkage of DPDK 00:04:42.262 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.520 EAL: Bus pci wants IOVA as 'DC' 00:04:42.520 EAL: Buses did not request a specific IOVA mode. 00:04:42.520 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.520 EAL: Selected IOVA mode 'VA' 00:04:42.520 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.520 EAL: Probing VFIO support... 00:04:42.520 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.520 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.520 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.520 EAL: VFIO support initialized 00:04:42.520 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.520 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.520 EAL: Setting up physically contiguous memory... 00:04:42.520 EAL: Setting maximum number of open files to 524288 00:04:42.520 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.520 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.520 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.520 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.520 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.520 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.520 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.520 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.520 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.520 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.520 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:42.521 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.521 EAL: Hugepages will be freed exactly as allocated. 00:04:42.521 EAL: No shared files mode enabled, IPC is disabled 00:04:42.521 EAL: No shared files mode enabled, IPC is disabled 00:04:42.521 EAL: TSC frequency is ~2700000 KHz 00:04:42.521 EAL: Main lcore 0 is ready (tid=7f43b7f56a40;cpuset=[0]) 00:04:42.521 EAL: Trying to obtain current memory policy. 00:04:42.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.521 EAL: Restoring previous memory policy: 0 00:04:42.521 EAL: request: mp_malloc_sync 00:04:42.521 EAL: No shared files mode enabled, IPC is disabled 00:04:42.521 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.521 EAL: No shared files mode enabled, IPC is disabled 00:04:42.521 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.521 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.521 00:04:42.521 00:04:42.521 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.521 http://cunit.sourceforge.net/ 00:04:42.521 00:04:42.521 00:04:42.521 Suite: components_suite 00:04:42.779 Test: vtophys_malloc_test ...passed 00:04:42.779 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.779 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.779 EAL: Restoring previous memory policy: 4 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.779 EAL: Trying to obtain current memory policy. 
00:04:42.779 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.779 EAL: Restoring previous memory policy: 4 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.779 EAL: Trying to obtain current memory policy. 00:04:42.779 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.779 EAL: Restoring previous memory policy: 4 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.779 EAL: request: mp_malloc_sync 00:04:42.779 EAL: No shared files mode enabled, IPC is disabled 00:04:42.779 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.037 EAL: Trying to obtain current memory policy. 00:04:43.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.037 EAL: Restoring previous memory policy: 4 00:04:43.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.037 EAL: request: mp_malloc_sync 00:04:43.037 EAL: No shared files mode enabled, IPC is disabled 00:04:43.037 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.037 EAL: request: mp_malloc_sync 00:04:43.037 EAL: No shared files mode enabled, IPC is disabled 00:04:43.037 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.037 EAL: Trying to obtain current memory policy. 00:04:43.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.037 EAL: Restoring previous memory policy: 4 00:04:43.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.037 EAL: request: mp_malloc_sync 00:04:43.037 EAL: No shared files mode enabled, IPC is disabled 00:04:43.037 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.037 EAL: request: mp_malloc_sync 00:04:43.037 EAL: No shared files mode enabled, IPC is disabled 00:04:43.037 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.037 EAL: Trying to obtain current memory policy. 00:04:43.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.037 EAL: Restoring previous memory policy: 4 00:04:43.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.037 EAL: request: mp_malloc_sync 00:04:43.037 EAL: No shared files mode enabled, IPC is disabled 00:04:43.037 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.295 EAL: request: mp_malloc_sync 00:04:43.295 EAL: No shared files mode enabled, IPC is disabled 00:04:43.295 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.295 EAL: Trying to obtain current memory policy. 
00:04:43.295 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.295 EAL: Restoring previous memory policy: 4 00:04:43.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.295 EAL: request: mp_malloc_sync 00:04:43.295 EAL: No shared files mode enabled, IPC is disabled 00:04:43.295 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.553 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.553 EAL: request: mp_malloc_sync 00:04:43.553 EAL: No shared files mode enabled, IPC is disabled 00:04:43.553 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.812 EAL: Trying to obtain current memory policy. 00:04:43.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.812 EAL: Restoring previous memory policy: 4 00:04:43.812 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.812 EAL: request: mp_malloc_sync 00:04:43.812 EAL: No shared files mode enabled, IPC is disabled 00:04:43.812 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.388 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.388 EAL: request: mp_malloc_sync 00:04:44.388 EAL: No shared files mode enabled, IPC is disabled 00:04:44.388 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.951 EAL: Trying to obtain current memory policy. 00:04:44.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.951 EAL: Restoring previous memory policy: 4 00:04:44.951 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.951 EAL: request: mp_malloc_sync 00:04:44.951 EAL: No shared files mode enabled, IPC is disabled 00:04:44.951 EAL: Heap on socket 0 was expanded by 514MB 00:04:45.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.141 EAL: request: mp_malloc_sync 00:04:46.141 EAL: No shared files mode enabled, IPC is disabled 00:04:46.141 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.076 EAL: Trying to obtain current memory policy. 
00:04:47.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.076 EAL: Restoring previous memory policy: 4 00:04:47.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.076 EAL: request: mp_malloc_sync 00:04:47.076 EAL: No shared files mode enabled, IPC is disabled 00:04:47.076 EAL: Heap on socket 0 was expanded by 1026MB 00:04:48.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.235 EAL: request: mp_malloc_sync 00:04:49.235 EAL: No shared files mode enabled, IPC is disabled 00:04:49.235 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.169 passed 00:04:51.169 00:04:51.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.169 suites 1 1 n/a 0 0 00:04:51.169 tests 2 2 2 0 0 00:04:51.169 asserts 497 497 497 0 n/a 00:04:51.169 00:04:51.169 Elapsed time = 8.349 seconds 00:04:51.169 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.169 EAL: request: mp_malloc_sync 00:04:51.169 EAL: No shared files mode enabled, IPC is disabled 00:04:51.169 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.169 EAL: No shared files mode enabled, IPC is disabled 00:04:51.169 EAL: No shared files mode enabled, IPC is disabled 00:04:51.169 EAL: No shared files mode enabled, IPC is disabled 00:04:51.169 00:04:51.169 real 0m8.608s 00:04:51.169 user 0m7.475s 00:04:51.169 sys 0m1.077s 00:04:51.169 14:37:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.169 14:37:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.169 ************************************ 00:04:51.169 END TEST env_vtophys 00:04:51.169 ************************************ 00:04:51.169 14:37:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.169 14:37:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.169 14:37:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.169 14:37:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.169 14:37:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.169 ************************************ 00:04:51.169 START TEST env_pci 00:04:51.169 ************************************ 00:04:51.169 14:37:30 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.169 00:04:51.169 00:04:51.169 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.169 http://cunit.sourceforge.net/ 00:04:51.169 00:04:51.169 00:04:51.169 Suite: pci 00:04:51.169 Test: pci_hook ...[2024-07-14 14:37:30.154623] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1746280 has claimed it 00:04:51.169 EAL: Cannot find device (10000:00:01.0) 00:04:51.169 EAL: Failed to attach device on primary process 00:04:51.169 passed 00:04:51.169 00:04:51.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.169 suites 1 1 n/a 0 0 00:04:51.170 tests 1 1 1 0 0 00:04:51.170 asserts 25 25 25 0 n/a 00:04:51.170 00:04:51.170 Elapsed time = 0.042 seconds 00:04:51.170 00:04:51.170 real 0m0.092s 00:04:51.170 user 0m0.041s 00:04:51.170 sys 0m0.049s 00:04:51.170 14:37:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.170 14:37:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.170 ************************************ 00:04:51.170 END TEST env_pci 00:04:51.170 ************************************ 
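Each of the env_* binaries above is driven through the same run_test wrapper, which is responsible for the START/END banners and the real/user/sys timings seen here; condensed to the calls visible in this log (workspace path as checked out for this job), the sequence is roughly:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$rootdir/test/common/autotest_common.sh"    # provides run_test (path assumed)
    run_test env_memory  "$rootdir/test/env/memory/memory_ut"
    run_test env_vtophys "$rootdir/test/env/vtophys/vtophys"
    run_test env_pci     "$rootdir/test/env/pci/pci_ut"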
00:04:51.170 14:37:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.170 14:37:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:51.170 14:37:30 env -- env/env.sh@15 -- # uname 00:04:51.170 14:37:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:51.170 14:37:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:51.170 14:37:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.170 14:37:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:51.170 14:37:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.170 14:37:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.170 ************************************ 00:04:51.170 START TEST env_dpdk_post_init 00:04:51.170 ************************************ 00:04:51.170 14:37:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.170 EAL: Detected CPU lcores: 48 00:04:51.170 EAL: Detected NUMA nodes: 2 00:04:51.170 EAL: Detected shared linkage of DPDK 00:04:51.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.170 EAL: Selected IOVA mode 'VA' 00:04:51.170 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.170 EAL: VFIO support initialized 00:04:51.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.427 EAL: Using IOMMU type 1 (Type 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:51.427 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:52.358 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:55.631 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:55.631 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:55.631 Starting DPDK initialization... 00:04:55.631 Starting SPDK post initialization... 00:04:55.631 SPDK NVMe probe 00:04:55.631 Attaching to 0000:88:00.0 00:04:55.631 Attached to 0000:88:00.0 00:04:55.631 Cleaning up... 
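The probe only attaches to 0000:88:00.0 because setup.sh had already rebound that device (and the I/OAT channels) to vfio-pci earlier in this run; a quick manual check of both the BDF discovery and the binding, using the same helpers the tests rely on, might look like this (BDF is specific to this node):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # -> 0000:88:00.0 on this node
    for bdf in "${bdfs[@]}"; do
        basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"             # expect vfio-pci before a DPDK probe
    done
    "$rootdir/scripts/setup.sh" status                                           # same view via SPDK's helper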
00:04:55.631 00:04:55.631 real 0m4.599s 00:04:55.631 user 0m3.398s 00:04:55.631 sys 0m0.258s 00:04:55.631 14:37:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.631 14:37:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.631 ************************************ 00:04:55.631 END TEST env_dpdk_post_init 00:04:55.631 ************************************ 00:04:55.631 14:37:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.631 14:37:34 env -- env/env.sh@26 -- # uname 00:04:55.631 14:37:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.631 14:37:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.631 14:37:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.631 14:37:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.631 14:37:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.631 ************************************ 00:04:55.631 START TEST env_mem_callbacks 00:04:55.631 ************************************ 00:04:55.631 14:37:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.889 EAL: Detected CPU lcores: 48 00:04:55.889 EAL: Detected NUMA nodes: 2 00:04:55.889 EAL: Detected shared linkage of DPDK 00:04:55.889 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.889 EAL: Selected IOVA mode 'VA' 00:04:55.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.889 EAL: VFIO support initialized 00:04:55.889 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.889 00:04:55.889 00:04:55.889 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.889 http://cunit.sourceforge.net/ 00:04:55.889 00:04:55.889 00:04:55.889 Suite: memory 00:04:55.889 Test: test ... 
00:04:55.889 register 0x200000200000 2097152 00:04:55.889 malloc 3145728 00:04:55.889 register 0x200000400000 4194304 00:04:55.889 buf 0x2000004fffc0 len 3145728 PASSED 00:04:55.889 malloc 64 00:04:55.889 buf 0x2000004ffec0 len 64 PASSED 00:04:55.889 malloc 4194304 00:04:55.889 register 0x200000800000 6291456 00:04:55.889 buf 0x2000009fffc0 len 4194304 PASSED 00:04:55.889 free 0x2000004fffc0 3145728 00:04:55.890 free 0x2000004ffec0 64 00:04:55.890 unregister 0x200000400000 4194304 PASSED 00:04:55.890 free 0x2000009fffc0 4194304 00:04:55.890 unregister 0x200000800000 6291456 PASSED 00:04:55.890 malloc 8388608 00:04:55.890 register 0x200000400000 10485760 00:04:55.890 buf 0x2000005fffc0 len 8388608 PASSED 00:04:55.890 free 0x2000005fffc0 8388608 00:04:55.890 unregister 0x200000400000 10485760 PASSED 00:04:55.890 passed 00:04:55.890 00:04:55.890 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.890 suites 1 1 n/a 0 0 00:04:55.890 tests 1 1 1 0 0 00:04:55.890 asserts 15 15 15 0 n/a 00:04:55.890 00:04:55.890 Elapsed time = 0.060 seconds 00:04:55.890 00:04:55.890 real 0m0.181s 00:04:55.890 user 0m0.100s 00:04:55.890 sys 0m0.080s 00:04:55.890 14:37:35 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.890 14:37:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.890 ************************************ 00:04:55.890 END TEST env_mem_callbacks 00:04:55.890 ************************************ 00:04:55.890 14:37:35 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.890 00:04:55.890 real 0m14.025s 00:04:55.890 user 0m11.365s 00:04:55.890 sys 0m1.675s 00:04:55.890 14:37:35 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.890 14:37:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.890 ************************************ 00:04:55.890 END TEST env 00:04:55.890 ************************************ 00:04:55.890 14:37:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.890 14:37:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.890 14:37:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.890 14:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.890 14:37:35 -- common/autotest_common.sh@10 -- # set +x 00:04:55.890 ************************************ 00:04:55.890 START TEST rpc 00:04:55.890 ************************************ 00:04:55.890 14:37:35 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.147 * Looking for test storage... 00:04:56.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.147 14:37:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1747065 00:04:56.147 14:37:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:56.147 14:37:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.147 14:37:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1747065 00:04:56.147 14:37:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 1747065 ']' 00:04:56.147 14:37:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.147 14:37:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.148 14:37:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:56.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.148 14:37:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.148 14:37:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.148 [2024-07-14 14:37:35.305635] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:56.148 [2024-07-14 14:37:35.305781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747065 ] 00:04:56.148 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.148 [2024-07-14 14:37:35.427535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.405 [2024-07-14 14:37:35.678760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.405 [2024-07-14 14:37:35.678841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1747065' to capture a snapshot of events at runtime. 00:04:56.405 [2024-07-14 14:37:35.678865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.405 [2024-07-14 14:37:35.678904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.405 [2024-07-14 14:37:35.678924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1747065 for offline analysis/debug. 00:04:56.405 [2024-07-14 14:37:35.678974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.338 14:37:36 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.338 14:37:36 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:57.338 14:37:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.338 14:37:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.338 14:37:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:57.338 14:37:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:57.338 14:37:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.338 14:37:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.338 14:37:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.338 ************************************ 00:04:57.338 START TEST rpc_integrity 00:04:57.338 ************************************ 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:57.338 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.338 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.596 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.596 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.596 { 00:04:57.596 "name": "Malloc0", 00:04:57.596 "aliases": [ 00:04:57.596 "5a6500c9-9582-47e9-8549-d48f5ba7e1ef" 00:04:57.596 ], 00:04:57.596 "product_name": "Malloc disk", 00:04:57.596 "block_size": 512, 00:04:57.596 "num_blocks": 16384, 00:04:57.596 "uuid": "5a6500c9-9582-47e9-8549-d48f5ba7e1ef", 00:04:57.596 "assigned_rate_limits": { 00:04:57.596 "rw_ios_per_sec": 0, 00:04:57.596 "rw_mbytes_per_sec": 0, 00:04:57.596 "r_mbytes_per_sec": 0, 00:04:57.596 "w_mbytes_per_sec": 0 00:04:57.596 }, 00:04:57.596 "claimed": false, 00:04:57.596 "zoned": false, 00:04:57.596 "supported_io_types": { 00:04:57.596 "read": true, 00:04:57.596 "write": true, 00:04:57.596 "unmap": true, 00:04:57.596 "flush": true, 00:04:57.596 "reset": true, 00:04:57.596 "nvme_admin": false, 00:04:57.596 "nvme_io": false, 00:04:57.597 "nvme_io_md": false, 00:04:57.597 "write_zeroes": true, 00:04:57.597 "zcopy": true, 00:04:57.597 "get_zone_info": false, 00:04:57.597 "zone_management": false, 00:04:57.597 "zone_append": false, 00:04:57.597 "compare": false, 00:04:57.597 "compare_and_write": false, 00:04:57.597 "abort": true, 00:04:57.597 "seek_hole": false, 00:04:57.597 "seek_data": false, 00:04:57.597 "copy": true, 00:04:57.597 "nvme_iov_md": false 00:04:57.597 }, 00:04:57.597 "memory_domains": [ 00:04:57.597 { 00:04:57.597 "dma_device_id": "system", 00:04:57.597 "dma_device_type": 1 00:04:57.597 }, 00:04:57.597 { 00:04:57.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.597 "dma_device_type": 2 00:04:57.597 } 00:04:57.597 ], 00:04:57.597 "driver_specific": {} 00:04:57.597 } 00:04:57.597 ]' 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 [2024-07-14 14:37:36.694836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:57.597 [2024-07-14 14:37:36.694935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.597 [2024-07-14 14:37:36.694976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:57.597 [2024-07-14 14:37:36.695003] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
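Stripped of the xtrace noise, the rpc_integrity sequence being exercised here reduces to a handful of rpc.py calls against the freshly started target (default /var/tmp/spdk.sock socket assumed); roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                        # creates Malloc0 (16384 blocks of 512 B)
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0    # claims Malloc0, exposes Passthru0
    $rpc bdev_get_bdevs | jq length                      # 2 while both bdevs exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                      # back to 0 after cleanup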
00:04:57.597 [2024-07-14 14:37:36.697689] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.597 [2024-07-14 14:37:36.697736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.597 Passthru0 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.597 { 00:04:57.597 "name": "Malloc0", 00:04:57.597 "aliases": [ 00:04:57.597 "5a6500c9-9582-47e9-8549-d48f5ba7e1ef" 00:04:57.597 ], 00:04:57.597 "product_name": "Malloc disk", 00:04:57.597 "block_size": 512, 00:04:57.597 "num_blocks": 16384, 00:04:57.597 "uuid": "5a6500c9-9582-47e9-8549-d48f5ba7e1ef", 00:04:57.597 "assigned_rate_limits": { 00:04:57.597 "rw_ios_per_sec": 0, 00:04:57.597 "rw_mbytes_per_sec": 0, 00:04:57.597 "r_mbytes_per_sec": 0, 00:04:57.597 "w_mbytes_per_sec": 0 00:04:57.597 }, 00:04:57.597 "claimed": true, 00:04:57.597 "claim_type": "exclusive_write", 00:04:57.597 "zoned": false, 00:04:57.597 "supported_io_types": { 00:04:57.597 "read": true, 00:04:57.597 "write": true, 00:04:57.597 "unmap": true, 00:04:57.597 "flush": true, 00:04:57.597 "reset": true, 00:04:57.597 "nvme_admin": false, 00:04:57.597 "nvme_io": false, 00:04:57.597 "nvme_io_md": false, 00:04:57.597 "write_zeroes": true, 00:04:57.597 "zcopy": true, 00:04:57.597 "get_zone_info": false, 00:04:57.597 "zone_management": false, 00:04:57.597 "zone_append": false, 00:04:57.597 "compare": false, 00:04:57.597 "compare_and_write": false, 00:04:57.597 "abort": true, 00:04:57.597 "seek_hole": false, 00:04:57.597 "seek_data": false, 00:04:57.597 "copy": true, 00:04:57.597 "nvme_iov_md": false 00:04:57.597 }, 00:04:57.597 "memory_domains": [ 00:04:57.597 { 00:04:57.597 "dma_device_id": "system", 00:04:57.597 "dma_device_type": 1 00:04:57.597 }, 00:04:57.597 { 00:04:57.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.597 "dma_device_type": 2 00:04:57.597 } 00:04:57.597 ], 00:04:57.597 "driver_specific": {} 00:04:57.597 }, 00:04:57.597 { 00:04:57.597 "name": "Passthru0", 00:04:57.597 "aliases": [ 00:04:57.597 "6aba9ab4-d186-5345-a34a-58cc9d6cbf75" 00:04:57.597 ], 00:04:57.597 "product_name": "passthru", 00:04:57.597 "block_size": 512, 00:04:57.597 "num_blocks": 16384, 00:04:57.597 "uuid": "6aba9ab4-d186-5345-a34a-58cc9d6cbf75", 00:04:57.597 "assigned_rate_limits": { 00:04:57.597 "rw_ios_per_sec": 0, 00:04:57.597 "rw_mbytes_per_sec": 0, 00:04:57.597 "r_mbytes_per_sec": 0, 00:04:57.597 "w_mbytes_per_sec": 0 00:04:57.597 }, 00:04:57.597 "claimed": false, 00:04:57.597 "zoned": false, 00:04:57.597 "supported_io_types": { 00:04:57.597 "read": true, 00:04:57.597 "write": true, 00:04:57.597 "unmap": true, 00:04:57.597 "flush": true, 00:04:57.597 "reset": true, 00:04:57.597 "nvme_admin": false, 00:04:57.597 "nvme_io": false, 00:04:57.597 "nvme_io_md": false, 00:04:57.597 "write_zeroes": true, 00:04:57.597 "zcopy": true, 00:04:57.597 "get_zone_info": false, 00:04:57.597 "zone_management": false, 00:04:57.597 "zone_append": false, 00:04:57.597 "compare": false, 00:04:57.597 "compare_and_write": false, 00:04:57.597 "abort": true, 00:04:57.597 
"seek_hole": false, 00:04:57.597 "seek_data": false, 00:04:57.597 "copy": true, 00:04:57.597 "nvme_iov_md": false 00:04:57.597 }, 00:04:57.597 "memory_domains": [ 00:04:57.597 { 00:04:57.597 "dma_device_id": "system", 00:04:57.597 "dma_device_type": 1 00:04:57.597 }, 00:04:57.597 { 00:04:57.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.597 "dma_device_type": 2 00:04:57.597 } 00:04:57.597 ], 00:04:57.597 "driver_specific": { 00:04:57.597 "passthru": { 00:04:57.597 "name": "Passthru0", 00:04:57.597 "base_bdev_name": "Malloc0" 00:04:57.597 } 00:04:57.597 } 00:04:57.597 } 00:04:57.597 ]' 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.597 14:37:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.597 00:04:57.597 real 0m0.258s 00:04:57.597 user 0m0.139s 00:04:57.597 sys 0m0.026s 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 ************************************ 00:04:57.597 END TEST rpc_integrity 00:04:57.597 ************************************ 00:04:57.597 14:37:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.597 14:37:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.597 14:37:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.597 14:37:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.597 14:37:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 ************************************ 00:04:57.597 START TEST rpc_plugins 00:04:57.597 ************************************ 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:57.597 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.597 14:37:36 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.597 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.597 { 00:04:57.597 "name": "Malloc1", 00:04:57.597 "aliases": [ 00:04:57.597 "49621add-a39e-4e30-b5c8-3f696480eb4d" 00:04:57.597 ], 00:04:57.597 "product_name": "Malloc disk", 00:04:57.597 "block_size": 4096, 00:04:57.597 "num_blocks": 256, 00:04:57.597 "uuid": "49621add-a39e-4e30-b5c8-3f696480eb4d", 00:04:57.597 "assigned_rate_limits": { 00:04:57.597 "rw_ios_per_sec": 0, 00:04:57.597 "rw_mbytes_per_sec": 0, 00:04:57.597 "r_mbytes_per_sec": 0, 00:04:57.597 "w_mbytes_per_sec": 0 00:04:57.597 }, 00:04:57.597 "claimed": false, 00:04:57.597 "zoned": false, 00:04:57.597 "supported_io_types": { 00:04:57.597 "read": true, 00:04:57.597 "write": true, 00:04:57.597 "unmap": true, 00:04:57.597 "flush": true, 00:04:57.597 "reset": true, 00:04:57.597 "nvme_admin": false, 00:04:57.597 "nvme_io": false, 00:04:57.597 "nvme_io_md": false, 00:04:57.597 "write_zeroes": true, 00:04:57.597 "zcopy": true, 00:04:57.597 "get_zone_info": false, 00:04:57.597 "zone_management": false, 00:04:57.597 "zone_append": false, 00:04:57.597 "compare": false, 00:04:57.597 "compare_and_write": false, 00:04:57.597 "abort": true, 00:04:57.597 "seek_hole": false, 00:04:57.597 "seek_data": false, 00:04:57.597 "copy": true, 00:04:57.597 "nvme_iov_md": false 00:04:57.597 }, 00:04:57.597 "memory_domains": [ 00:04:57.597 { 00:04:57.598 "dma_device_id": "system", 00:04:57.598 "dma_device_type": 1 00:04:57.598 }, 00:04:57.598 { 00:04:57.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.598 "dma_device_type": 2 00:04:57.598 } 00:04:57.598 ], 00:04:57.598 "driver_specific": {} 00:04:57.598 } 00:04:57.598 ]' 00:04:57.598 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.855 14:37:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.855 00:04:57.855 real 0m0.121s 00:04:57.855 user 0m0.083s 00:04:57.855 sys 0m0.006s 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.855 14:37:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.855 ************************************ 00:04:57.855 END TEST rpc_plugins 00:04:57.855 ************************************ 00:04:57.855 14:37:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.855 14:37:37 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.855 14:37:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.855 14:37:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.855 14:37:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.855 ************************************ 00:04:57.855 START TEST rpc_trace_cmd_test 00:04:57.855 ************************************ 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.855 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1747065", 00:04:57.855 "tpoint_group_mask": "0x8", 00:04:57.855 "iscsi_conn": { 00:04:57.855 "mask": "0x2", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "scsi": { 00:04:57.855 "mask": "0x4", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "bdev": { 00:04:57.855 "mask": "0x8", 00:04:57.855 "tpoint_mask": "0xffffffffffffffff" 00:04:57.855 }, 00:04:57.855 "nvmf_rdma": { 00:04:57.855 "mask": "0x10", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "nvmf_tcp": { 00:04:57.855 "mask": "0x20", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "ftl": { 00:04:57.855 "mask": "0x40", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "blobfs": { 00:04:57.855 "mask": "0x80", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "dsa": { 00:04:57.855 "mask": "0x200", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "thread": { 00:04:57.855 "mask": "0x400", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "nvme_pcie": { 00:04:57.855 "mask": "0x800", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "iaa": { 00:04:57.855 "mask": "0x1000", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "nvme_tcp": { 00:04:57.855 "mask": "0x2000", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "bdev_nvme": { 00:04:57.855 "mask": "0x4000", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 }, 00:04:57.855 "sock": { 00:04:57.855 "mask": "0x8000", 00:04:57.855 "tpoint_mask": "0x0" 00:04:57.855 } 00:04:57.855 }' 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.855 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:58.113 00:04:58.113 real 0m0.199s 00:04:58.113 user 0m0.176s 00:04:58.113 sys 0m0.016s 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.113 14:37:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 END TEST rpc_trace_cmd_test 00:04:58.113 ************************************ 00:04:58.113 14:37:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.113 14:37:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.113 14:37:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.113 14:37:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.113 14:37:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.113 14:37:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.113 14:37:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 START TEST rpc_daemon_integrity 00:04:58.113 ************************************ 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.113 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.113 { 00:04:58.113 "name": "Malloc2", 00:04:58.113 "aliases": [ 00:04:58.113 "048a4542-5590-47bf-a498-49c67f6a0638" 00:04:58.113 ], 00:04:58.113 "product_name": "Malloc disk", 00:04:58.113 "block_size": 512, 00:04:58.113 "num_blocks": 16384, 00:04:58.113 "uuid": "048a4542-5590-47bf-a498-49c67f6a0638", 00:04:58.113 "assigned_rate_limits": { 00:04:58.113 "rw_ios_per_sec": 0, 00:04:58.113 "rw_mbytes_per_sec": 0, 00:04:58.113 "r_mbytes_per_sec": 0, 00:04:58.113 "w_mbytes_per_sec": 0 00:04:58.113 }, 00:04:58.113 "claimed": false, 00:04:58.113 "zoned": false, 00:04:58.113 "supported_io_types": { 00:04:58.113 "read": true, 00:04:58.113 "write": true, 00:04:58.113 "unmap": true, 00:04:58.113 "flush": true, 00:04:58.113 "reset": true, 00:04:58.113 "nvme_admin": false, 
00:04:58.113 "nvme_io": false, 00:04:58.113 "nvme_io_md": false, 00:04:58.113 "write_zeroes": true, 00:04:58.113 "zcopy": true, 00:04:58.114 "get_zone_info": false, 00:04:58.114 "zone_management": false, 00:04:58.114 "zone_append": false, 00:04:58.114 "compare": false, 00:04:58.114 "compare_and_write": false, 00:04:58.114 "abort": true, 00:04:58.114 "seek_hole": false, 00:04:58.114 "seek_data": false, 00:04:58.114 "copy": true, 00:04:58.114 "nvme_iov_md": false 00:04:58.114 }, 00:04:58.114 "memory_domains": [ 00:04:58.114 { 00:04:58.114 "dma_device_id": "system", 00:04:58.114 "dma_device_type": 1 00:04:58.114 }, 00:04:58.114 { 00:04:58.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.114 "dma_device_type": 2 00:04:58.114 } 00:04:58.114 ], 00:04:58.114 "driver_specific": {} 00:04:58.114 } 00:04:58.114 ]' 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.114 [2024-07-14 14:37:37.408368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:58.114 [2024-07-14 14:37:37.408437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.114 [2024-07-14 14:37:37.408474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:58.114 [2024-07-14 14:37:37.408502] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.114 [2024-07-14 14:37:37.411132] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.114 [2024-07-14 14:37:37.411187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.114 Passthru0 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.114 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.394 { 00:04:58.394 "name": "Malloc2", 00:04:58.394 "aliases": [ 00:04:58.394 "048a4542-5590-47bf-a498-49c67f6a0638" 00:04:58.394 ], 00:04:58.394 "product_name": "Malloc disk", 00:04:58.394 "block_size": 512, 00:04:58.394 "num_blocks": 16384, 00:04:58.394 "uuid": "048a4542-5590-47bf-a498-49c67f6a0638", 00:04:58.394 "assigned_rate_limits": { 00:04:58.394 "rw_ios_per_sec": 0, 00:04:58.394 "rw_mbytes_per_sec": 0, 00:04:58.394 "r_mbytes_per_sec": 0, 00:04:58.394 "w_mbytes_per_sec": 0 00:04:58.394 }, 00:04:58.394 "claimed": true, 00:04:58.394 "claim_type": "exclusive_write", 00:04:58.394 "zoned": false, 00:04:58.394 "supported_io_types": { 00:04:58.394 "read": true, 00:04:58.394 "write": true, 00:04:58.394 "unmap": true, 00:04:58.394 "flush": true, 00:04:58.394 "reset": true, 00:04:58.394 "nvme_admin": false, 00:04:58.394 "nvme_io": false, 00:04:58.394 "nvme_io_md": false, 00:04:58.394 "write_zeroes": true, 00:04:58.394 "zcopy": 
true, 00:04:58.394 "get_zone_info": false, 00:04:58.394 "zone_management": false, 00:04:58.394 "zone_append": false, 00:04:58.394 "compare": false, 00:04:58.394 "compare_and_write": false, 00:04:58.394 "abort": true, 00:04:58.394 "seek_hole": false, 00:04:58.394 "seek_data": false, 00:04:58.394 "copy": true, 00:04:58.394 "nvme_iov_md": false 00:04:58.394 }, 00:04:58.394 "memory_domains": [ 00:04:58.394 { 00:04:58.394 "dma_device_id": "system", 00:04:58.394 "dma_device_type": 1 00:04:58.394 }, 00:04:58.394 { 00:04:58.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.394 "dma_device_type": 2 00:04:58.394 } 00:04:58.394 ], 00:04:58.394 "driver_specific": {} 00:04:58.394 }, 00:04:58.394 { 00:04:58.394 "name": "Passthru0", 00:04:58.394 "aliases": [ 00:04:58.394 "56061726-da70-544a-81c1-eaa43c207278" 00:04:58.394 ], 00:04:58.394 "product_name": "passthru", 00:04:58.394 "block_size": 512, 00:04:58.394 "num_blocks": 16384, 00:04:58.394 "uuid": "56061726-da70-544a-81c1-eaa43c207278", 00:04:58.394 "assigned_rate_limits": { 00:04:58.394 "rw_ios_per_sec": 0, 00:04:58.394 "rw_mbytes_per_sec": 0, 00:04:58.394 "r_mbytes_per_sec": 0, 00:04:58.394 "w_mbytes_per_sec": 0 00:04:58.394 }, 00:04:58.394 "claimed": false, 00:04:58.394 "zoned": false, 00:04:58.394 "supported_io_types": { 00:04:58.394 "read": true, 00:04:58.394 "write": true, 00:04:58.394 "unmap": true, 00:04:58.394 "flush": true, 00:04:58.394 "reset": true, 00:04:58.394 "nvme_admin": false, 00:04:58.394 "nvme_io": false, 00:04:58.394 "nvme_io_md": false, 00:04:58.394 "write_zeroes": true, 00:04:58.394 "zcopy": true, 00:04:58.394 "get_zone_info": false, 00:04:58.394 "zone_management": false, 00:04:58.394 "zone_append": false, 00:04:58.394 "compare": false, 00:04:58.394 "compare_and_write": false, 00:04:58.394 "abort": true, 00:04:58.394 "seek_hole": false, 00:04:58.394 "seek_data": false, 00:04:58.394 "copy": true, 00:04:58.394 "nvme_iov_md": false 00:04:58.394 }, 00:04:58.394 "memory_domains": [ 00:04:58.394 { 00:04:58.394 "dma_device_id": "system", 00:04:58.394 "dma_device_type": 1 00:04:58.394 }, 00:04:58.394 { 00:04:58.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.394 "dma_device_type": 2 00:04:58.394 } 00:04:58.394 ], 00:04:58.394 "driver_specific": { 00:04:58.394 "passthru": { 00:04:58.394 "name": "Passthru0", 00:04:58.394 "base_bdev_name": "Malloc2" 00:04:58.394 } 00:04:58.394 } 00:04:58.394 } 00:04:58.394 ]' 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.394 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.395 00:04:58.395 real 0m0.267s 00:04:58.395 user 0m0.158s 00:04:58.395 sys 0m0.021s 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.395 14:37:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.395 ************************************ 00:04:58.395 END TEST rpc_daemon_integrity 00:04:58.395 ************************************ 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.395 14:37:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:58.395 14:37:37 rpc -- rpc/rpc.sh@84 -- # killprocess 1747065 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@948 -- # '[' -z 1747065 ']' 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@952 -- # kill -0 1747065 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@953 -- # uname 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747065 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747065' 00:04:58.395 killing process with pid 1747065 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@967 -- # kill 1747065 00:04:58.395 14:37:37 rpc -- common/autotest_common.sh@972 -- # wait 1747065 00:05:00.920 00:05:00.920 real 0m4.911s 00:05:00.920 user 0m5.468s 00:05:00.920 sys 0m0.753s 00:05:00.920 14:37:40 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.920 14:37:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.920 ************************************ 00:05:00.920 END TEST rpc 00:05:00.920 ************************************ 00:05:00.920 14:37:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.921 14:37:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.921 14:37:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.921 14:37:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.921 14:37:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 ************************************ 00:05:00.921 START TEST skip_rpc 00:05:00.921 ************************************ 00:05:00.921 14:37:40 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.921 * Looking for test storage... 
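Note on the rpc_integrity and rpc_daemon_integrity runs traced above: both exercise the same round trip over the target's JSON-RPC socket — create a malloc bdev, layer a passthru bdev on it, confirm bdev_get_bdevs reports two bdevs, tear both down, and confirm the list is empty again (the daemon variant simply repeats the sequence with Malloc2 through the test's rpc wrapper). A minimal stand-alone sketch of that flow, assuming a target is already running and SPDK's stock scripts/rpc.py client is used; the rpc.py path and the Passthru0 name are illustrative, not taken from this run:

    rpc=scripts/rpc.py                                 # stock SPDK JSON-RPC client (assumed location)
    malloc=$($rpc bdev_malloc_create 8 512)            # 8 MB malloc bdev, 512-byte blocks; prints the bdev name
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    test "$($rpc bdev_get_bdevs | jq length)" -eq 2    # malloc + passthru both visible
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    test "$($rpc bdev_get_bdevs | jq length)" -eq 0    # bdev list is empty again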
00:05:00.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.921 14:37:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.921 14:37:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.921 14:37:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.921 14:37:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.921 14:37:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.921 14:37:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 ************************************ 00:05:00.921 START TEST skip_rpc 00:05:00.921 ************************************ 00:05:00.921 14:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:00.921 14:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1747782 00:05:00.921 14:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.921 14:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.921 14:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.178 [2024-07-14 14:37:40.284380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:01.178 [2024-07-14 14:37:40.284528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747782 ] 00:05:01.178 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.178 [2024-07-14 14:37:40.411578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.435 [2024-07-14 14:37:40.667738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1747782 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1747782 ']' 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1747782 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747782 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747782' 00:05:06.692 killing process with pid 1747782 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1747782 00:05:06.692 14:37:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1747782 00:05:08.592 00:05:08.592 real 0m7.522s 00:05:08.592 user 0m7.018s 00:05:08.592 sys 0m0.488s 00:05:08.592 14:37:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.592 14:37:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 ************************************ 00:05:08.592 END TEST skip_rpc 00:05:08.592 ************************************ 00:05:08.592 14:37:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.592 14:37:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.592 14:37:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.592 14:37:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.592 14:37:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 ************************************ 00:05:08.592 START TEST skip_rpc_with_json 00:05:08.592 ************************************ 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1748733 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1748733 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1748733 ']' 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
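The skip_rpc case that finished just above is a negative test: spdk_tgt is started with --no-rpc-server, so the pass condition is that an RPC call does not get through (the NOT wrapper inverts the exit status of rpc_cmd spdk_get_version). A rough equivalent without the test harness, assuming build-tree spdk_tgt and scripts/rpc.py paths, which are placeholders here:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &      # target started without an RPC listener
    tgt_pid=$!
    sleep 5                                            # the test likewise just sleeps before probing
    if scripts/rpc.py spdk_get_version; then           # must fail: nothing is listening on the RPC socket
        echo "unexpected: RPC answered"; kill "$tgt_pid"; exit 1
    fi
    kill "$tgt_pid"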
00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.592 14:37:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 [2024-07-14 14:37:47.856516] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:08.592 [2024-07-14 14:37:47.856685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748733 ] 00:05:08.850 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.850 [2024-07-14 14:37:47.993537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.108 [2024-07-14 14:37:48.248323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.043 [2024-07-14 14:37:49.109649] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:10.043 request: 00:05:10.043 { 00:05:10.043 "trtype": "tcp", 00:05:10.043 "method": "nvmf_get_transports", 00:05:10.043 "req_id": 1 00:05:10.043 } 00:05:10.043 Got JSON-RPC error response 00:05:10.043 response: 00:05:10.043 { 00:05:10.043 "code": -19, 00:05:10.043 "message": "No such device" 00:05:10.043 } 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.043 [2024-07-14 14:37:49.117800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.043 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.043 { 00:05:10.043 "subsystems": [ 00:05:10.043 { 00:05:10.043 "subsystem": "keyring", 00:05:10.043 "config": [] 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "subsystem": "iobuf", 00:05:10.043 "config": [ 00:05:10.043 { 00:05:10.043 "method": "iobuf_set_options", 00:05:10.043 "params": { 00:05:10.043 "small_pool_count": 8192, 00:05:10.043 "large_pool_count": 1024, 00:05:10.043 "small_bufsize": 8192, 00:05:10.043 "large_bufsize": 135168 00:05:10.043 } 00:05:10.043 } 00:05:10.043 ] 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "subsystem": 
"sock", 00:05:10.043 "config": [ 00:05:10.043 { 00:05:10.043 "method": "sock_set_default_impl", 00:05:10.043 "params": { 00:05:10.043 "impl_name": "posix" 00:05:10.043 } 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "method": "sock_impl_set_options", 00:05:10.043 "params": { 00:05:10.043 "impl_name": "ssl", 00:05:10.043 "recv_buf_size": 4096, 00:05:10.043 "send_buf_size": 4096, 00:05:10.043 "enable_recv_pipe": true, 00:05:10.043 "enable_quickack": false, 00:05:10.043 "enable_placement_id": 0, 00:05:10.043 "enable_zerocopy_send_server": true, 00:05:10.043 "enable_zerocopy_send_client": false, 00:05:10.043 "zerocopy_threshold": 0, 00:05:10.043 "tls_version": 0, 00:05:10.043 "enable_ktls": false 00:05:10.043 } 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "method": "sock_impl_set_options", 00:05:10.043 "params": { 00:05:10.043 "impl_name": "posix", 00:05:10.043 "recv_buf_size": 2097152, 00:05:10.043 "send_buf_size": 2097152, 00:05:10.043 "enable_recv_pipe": true, 00:05:10.043 "enable_quickack": false, 00:05:10.043 "enable_placement_id": 0, 00:05:10.043 "enable_zerocopy_send_server": true, 00:05:10.043 "enable_zerocopy_send_client": false, 00:05:10.043 "zerocopy_threshold": 0, 00:05:10.043 "tls_version": 0, 00:05:10.043 "enable_ktls": false 00:05:10.043 } 00:05:10.043 } 00:05:10.043 ] 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "subsystem": "vmd", 00:05:10.043 "config": [] 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "subsystem": "accel", 00:05:10.043 "config": [ 00:05:10.043 { 00:05:10.043 "method": "accel_set_options", 00:05:10.043 "params": { 00:05:10.043 "small_cache_size": 128, 00:05:10.043 "large_cache_size": 16, 00:05:10.043 "task_count": 2048, 00:05:10.043 "sequence_count": 2048, 00:05:10.043 "buf_count": 2048 00:05:10.043 } 00:05:10.043 } 00:05:10.043 ] 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "subsystem": "bdev", 00:05:10.043 "config": [ 00:05:10.043 { 00:05:10.043 "method": "bdev_set_options", 00:05:10.043 "params": { 00:05:10.043 "bdev_io_pool_size": 65535, 00:05:10.043 "bdev_io_cache_size": 256, 00:05:10.043 "bdev_auto_examine": true, 00:05:10.043 "iobuf_small_cache_size": 128, 00:05:10.043 "iobuf_large_cache_size": 16 00:05:10.043 } 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "method": "bdev_raid_set_options", 00:05:10.043 "params": { 00:05:10.043 "process_window_size_kb": 1024 00:05:10.043 } 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "method": "bdev_iscsi_set_options", 00:05:10.043 "params": { 00:05:10.043 "timeout_sec": 30 00:05:10.043 } 00:05:10.043 }, 00:05:10.043 { 00:05:10.043 "method": "bdev_nvme_set_options", 00:05:10.043 "params": { 00:05:10.044 "action_on_timeout": "none", 00:05:10.044 "timeout_us": 0, 00:05:10.044 "timeout_admin_us": 0, 00:05:10.044 "keep_alive_timeout_ms": 10000, 00:05:10.044 "arbitration_burst": 0, 00:05:10.044 "low_priority_weight": 0, 00:05:10.044 "medium_priority_weight": 0, 00:05:10.044 "high_priority_weight": 0, 00:05:10.044 "nvme_adminq_poll_period_us": 10000, 00:05:10.044 "nvme_ioq_poll_period_us": 0, 00:05:10.044 "io_queue_requests": 0, 00:05:10.044 "delay_cmd_submit": true, 00:05:10.044 "transport_retry_count": 4, 00:05:10.044 "bdev_retry_count": 3, 00:05:10.044 "transport_ack_timeout": 0, 00:05:10.044 "ctrlr_loss_timeout_sec": 0, 00:05:10.044 "reconnect_delay_sec": 0, 00:05:10.044 "fast_io_fail_timeout_sec": 0, 00:05:10.044 "disable_auto_failback": false, 00:05:10.044 "generate_uuids": false, 00:05:10.044 "transport_tos": 0, 00:05:10.044 "nvme_error_stat": false, 00:05:10.044 "rdma_srq_size": 0, 00:05:10.044 "io_path_stat": false, 
00:05:10.044 "allow_accel_sequence": false, 00:05:10.044 "rdma_max_cq_size": 0, 00:05:10.044 "rdma_cm_event_timeout_ms": 0, 00:05:10.044 "dhchap_digests": [ 00:05:10.044 "sha256", 00:05:10.044 "sha384", 00:05:10.044 "sha512" 00:05:10.044 ], 00:05:10.044 "dhchap_dhgroups": [ 00:05:10.044 "null", 00:05:10.044 "ffdhe2048", 00:05:10.044 "ffdhe3072", 00:05:10.044 "ffdhe4096", 00:05:10.044 "ffdhe6144", 00:05:10.044 "ffdhe8192" 00:05:10.044 ] 00:05:10.044 } 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "method": "bdev_nvme_set_hotplug", 00:05:10.044 "params": { 00:05:10.044 "period_us": 100000, 00:05:10.044 "enable": false 00:05:10.044 } 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "method": "bdev_wait_for_examine" 00:05:10.044 } 00:05:10.044 ] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "scsi", 00:05:10.044 "config": null 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "scheduler", 00:05:10.044 "config": [ 00:05:10.044 { 00:05:10.044 "method": "framework_set_scheduler", 00:05:10.044 "params": { 00:05:10.044 "name": "static" 00:05:10.044 } 00:05:10.044 } 00:05:10.044 ] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "vhost_scsi", 00:05:10.044 "config": [] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "vhost_blk", 00:05:10.044 "config": [] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "ublk", 00:05:10.044 "config": [] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "nbd", 00:05:10.044 "config": [] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "nvmf", 00:05:10.044 "config": [ 00:05:10.044 { 00:05:10.044 "method": "nvmf_set_config", 00:05:10.044 "params": { 00:05:10.044 "discovery_filter": "match_any", 00:05:10.044 "admin_cmd_passthru": { 00:05:10.044 "identify_ctrlr": false 00:05:10.044 } 00:05:10.044 } 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "method": "nvmf_set_max_subsystems", 00:05:10.044 "params": { 00:05:10.044 "max_subsystems": 1024 00:05:10.044 } 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "method": "nvmf_set_crdt", 00:05:10.044 "params": { 00:05:10.044 "crdt1": 0, 00:05:10.044 "crdt2": 0, 00:05:10.044 "crdt3": 0 00:05:10.044 } 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "method": "nvmf_create_transport", 00:05:10.044 "params": { 00:05:10.044 "trtype": "TCP", 00:05:10.044 "max_queue_depth": 128, 00:05:10.044 "max_io_qpairs_per_ctrlr": 127, 00:05:10.044 "in_capsule_data_size": 4096, 00:05:10.044 "max_io_size": 131072, 00:05:10.044 "io_unit_size": 131072, 00:05:10.044 "max_aq_depth": 128, 00:05:10.044 "num_shared_buffers": 511, 00:05:10.044 "buf_cache_size": 4294967295, 00:05:10.044 "dif_insert_or_strip": false, 00:05:10.044 "zcopy": false, 00:05:10.044 "c2h_success": true, 00:05:10.044 "sock_priority": 0, 00:05:10.044 "abort_timeout_sec": 1, 00:05:10.044 "ack_timeout": 0, 00:05:10.044 "data_wr_pool_size": 0 00:05:10.044 } 00:05:10.044 } 00:05:10.044 ] 00:05:10.044 }, 00:05:10.044 { 00:05:10.044 "subsystem": "iscsi", 00:05:10.044 "config": [ 00:05:10.044 { 00:05:10.044 "method": "iscsi_set_options", 00:05:10.044 "params": { 00:05:10.044 "node_base": "iqn.2016-06.io.spdk", 00:05:10.044 "max_sessions": 128, 00:05:10.044 "max_connections_per_session": 2, 00:05:10.044 "max_queue_depth": 64, 00:05:10.044 "default_time2wait": 2, 00:05:10.044 "default_time2retain": 20, 00:05:10.044 "first_burst_length": 8192, 00:05:10.044 "immediate_data": true, 00:05:10.044 "allow_duplicated_isid": false, 00:05:10.044 "error_recovery_level": 0, 00:05:10.044 "nop_timeout": 60, 00:05:10.044 "nop_in_interval": 30, 00:05:10.044 "disable_chap": 
false, 00:05:10.044 "require_chap": false, 00:05:10.044 "mutual_chap": false, 00:05:10.044 "chap_group": 0, 00:05:10.044 "max_large_datain_per_connection": 64, 00:05:10.044 "max_r2t_per_connection": 4, 00:05:10.044 "pdu_pool_size": 36864, 00:05:10.044 "immediate_data_pool_size": 16384, 00:05:10.044 "data_out_pool_size": 2048 00:05:10.044 } 00:05:10.044 } 00:05:10.044 ] 00:05:10.044 } 00:05:10.044 ] 00:05:10.044 } 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1748733 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1748733 ']' 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1748733 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748733 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748733' 00:05:10.044 killing process with pid 1748733 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1748733 00:05:10.044 14:37:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1748733 00:05:12.572 14:37:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1749147 00:05:12.572 14:37:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.572 14:37:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1749147 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1749147 ']' 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1749147 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749147 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749147' 00:05:17.865 killing process with pid 1749147 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1749147 00:05:17.865 14:37:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1749147 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:20.392 00:05:20.392 real 0m11.520s 00:05:20.392 user 0m11.022s 00:05:20.392 sys 0m1.016s 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 ************************************ 00:05:20.392 END TEST skip_rpc_with_json 00:05:20.392 ************************************ 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.392 14:37:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 ************************************ 00:05:20.392 START TEST skip_rpc_with_delay 00:05:20.392 ************************************ 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:20.392 [2024-07-14 14:37:59.420704] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
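skip_rpc_with_json, which wrapped up just above, checks that runtime state survives a configuration round trip: the TCP transport is created over RPC, save_config dumps the live configuration to config.json, a fresh spdk_tgt is started with --json pointing at that file, and the new target's log is grepped for the 'TCP Transport Init' notice that only appears if the transport was recreated. A condensed sketch of that round trip, assuming the same spdk_tgt and rpc.py locations as in the sketches above and placeholder paths for the config and log files:

    scripts/rpc.py nvmf_create_transport -t tcp                    # state that should be persisted
    scripts/rpc.py save_config > /tmp/config.json                  # snapshot the running configuration
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/tgt.log 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/tgt.log                      # only printed if the transport came back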
00:05:20.392 [2024-07-14 14:37:59.420903] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.392 00:05:20.392 real 0m0.140s 00:05:20.392 user 0m0.076s 00:05:20.392 sys 0m0.062s 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.392 14:37:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 ************************************ 00:05:20.392 END TEST skip_rpc_with_delay 00:05:20.392 ************************************ 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.392 14:37:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:20.392 14:37:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:20.392 14:37:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.392 14:37:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 ************************************ 00:05:20.392 START TEST exit_on_failed_rpc_init 00:05:20.392 ************************************ 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1750133 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1750133 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1750133 ']' 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.392 14:37:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 [2024-07-14 14:37:59.607226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:20.392 [2024-07-14 14:37:59.607401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750133 ] 00:05:20.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.650 [2024-07-14 14:37:59.732246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.908 [2024-07-14 14:37:59.986654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:21.841 14:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.841 [2024-07-14 14:38:00.955380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:21.841 [2024-07-14 14:38:00.955533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750272 ] 00:05:21.841 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.841 [2024-07-14 14:38:01.088140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.098 [2024-07-14 14:38:01.341093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.098 [2024-07-14 14:38:01.341252] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:22.098 [2024-07-14 14:38:01.341288] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:22.098 [2024-07-14 14:38:01.341313] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1750133 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1750133 ']' 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1750133 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1750133 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1750133' 00:05:22.663 killing process with pid 1750133 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1750133 00:05:22.663 14:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1750133 00:05:25.199 00:05:25.199 real 0m4.809s 00:05:25.199 user 0m5.504s 00:05:25.199 sys 0m0.740s 00:05:25.199 14:38:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.199 14:38:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.199 ************************************ 00:05:25.199 END TEST exit_on_failed_rpc_init 00:05:25.199 ************************************ 00:05:25.199 14:38:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.199 14:38:04 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.199 00:05:25.199 real 0m24.221s 00:05:25.199 user 0m23.717s 00:05:25.199 sys 0m2.455s 00:05:25.199 14:38:04 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.199 14:38:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.199 ************************************ 00:05:25.199 END TEST skip_rpc 00:05:25.199 ************************************ 00:05:25.199 14:38:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.199 14:38:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.199 14:38:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.199 14:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.199 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:05:25.199 ************************************ 00:05:25.199 START TEST rpc_client 00:05:25.199 ************************************ 00:05:25.199 14:38:04 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.199 * Looking for test storage... 00:05:25.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:25.199 14:38:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:25.199 OK 00:05:25.199 14:38:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.199 00:05:25.199 real 0m0.096s 00:05:25.199 user 0m0.045s 00:05:25.199 sys 0m0.056s 00:05:25.199 14:38:04 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.199 14:38:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:25.199 ************************************ 00:05:25.199 END TEST rpc_client 00:05:25.199 ************************************ 00:05:25.199 14:38:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.199 14:38:04 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.199 14:38:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.199 14:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.199 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 ************************************ 00:05:25.457 START TEST json_config 00:05:25.457 ************************************ 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.457 
14:38:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.457 14:38:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.457 14:38:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.457 14:38:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.457 14:38:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.457 14:38:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.457 14:38:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.457 14:38:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:25.457 14:38:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@47 -- # : 0 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.457 14:38:04 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.457 14:38:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:25.457 INFO: JSON configuration test init 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 14:38:04 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:25.457 14:38:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.457 14:38:04 json_config -- json_config/common.sh@10 -- # shift 00:05:25.457 14:38:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.457 14:38:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.457 14:38:04 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.457 14:38:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.457 14:38:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.457 14:38:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1750907 00:05:25.457 14:38:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:25.457 14:38:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.457 Waiting for target to run... 00:05:25.457 14:38:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1750907 /var/tmp/spdk_tgt.sock 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 1750907 ']' 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.457 14:38:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 [2024-07-14 14:38:04.681255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:25.457 [2024-07-14 14:38:04.681401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750907 ] 00:05:25.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.022 [2024-07-14 14:38:05.269841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.280 [2024-07-14 14:38:05.509032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:26.536 14:38:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:26.536 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.536 14:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:26.536 14:38:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:26.536 14:38:05 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:30.719 14:38:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.719 14:38:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:30.719 14:38:09 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.719 14:38:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.719 MallocForNvmf0 00:05:30.719 14:38:10 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.719 14:38:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.977 MallocForNvmf1 00:05:30.977 14:38:10 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.977 14:38:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.235 [2024-07-14 14:38:10.481401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.235 14:38:10 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.235 14:38:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.493 14:38:10 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.493 14:38:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.751 14:38:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.751 14:38:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.009 14:38:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.009 14:38:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.267 [2024-07-14 14:38:11.440728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.267 14:38:11 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:32.267 14:38:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.267 14:38:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 14:38:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:32.267 14:38:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.267 14:38:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 14:38:11 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:32.267 14:38:11 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.267 14:38:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.525 MallocBdevForConfigChangeCheck 00:05:32.525 14:38:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:32.525 14:38:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.525 14:38:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.525 14:38:11 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:32.525 14:38:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.091 14:38:12 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:33.091 INFO: shutting down applications... 00:05:33.091 14:38:12 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:33.091 14:38:12 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:33.091 14:38:12 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:33.091 14:38:12 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.995 Calling clear_iscsi_subsystem 00:05:34.995 Calling clear_nvmf_subsystem 00:05:34.995 Calling clear_nbd_subsystem 00:05:34.995 Calling clear_ublk_subsystem 00:05:34.995 Calling clear_vhost_blk_subsystem 00:05:34.995 Calling clear_vhost_scsi_subsystem 00:05:34.995 Calling clear_bdev_subsystem 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:34.995 14:38:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:34.995 14:38:14 json_config -- json_config/json_config.sh@345 -- # break 00:05:34.995 14:38:14 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:34.995 14:38:14 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:34.995 14:38:14 json_config -- json_config/common.sh@31 -- # local app=target 00:05:34.995 14:38:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.995 14:38:14 json_config -- json_config/common.sh@35 -- # [[ -n 1750907 ]] 00:05:34.995 14:38:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1750907 00:05:34.995 14:38:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.995 14:38:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.995 14:38:14 json_config -- json_config/common.sh@41 -- # kill -0 1750907 00:05:34.995 14:38:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.561 14:38:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.561 14:38:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.561 14:38:14 json_config -- json_config/common.sh@41 -- # kill -0 1750907 00:05:35.561 14:38:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.126 14:38:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.126 14:38:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.126 14:38:15 json_config -- json_config/common.sh@41 -- # kill -0 1750907 
00:05:36.126 14:38:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.716 14:38:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.716 14:38:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.716 14:38:15 json_config -- json_config/common.sh@41 -- # kill -0 1750907 00:05:36.716 14:38:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.716 14:38:15 json_config -- json_config/common.sh@43 -- # break 00:05:36.716 14:38:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.716 14:38:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.716 SPDK target shutdown done 00:05:36.716 14:38:15 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.716 INFO: relaunching applications... 00:05:36.716 14:38:15 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.716 14:38:15 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.716 14:38:15 json_config -- json_config/common.sh@10 -- # shift 00:05:36.716 14:38:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.716 14:38:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.716 14:38:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.716 14:38:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.716 14:38:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.716 14:38:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1752361 00:05:36.716 14:38:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.716 Waiting for target to run... 00:05:36.716 14:38:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.716 14:38:15 json_config -- json_config/common.sh@25 -- # waitforlisten 1752361 /var/tmp/spdk_tgt.sock 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 1752361 ']' 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.716 14:38:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.716 [2024-07-14 14:38:15.817240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:36.716 [2024-07-14 14:38:15.817390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752361 ] 00:05:36.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.296 [2024-07-14 14:38:16.419676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.554 [2024-07-14 14:38:16.653814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.739 [2024-07-14 14:38:20.381398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.739 [2024-07-14 14:38:20.413948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.739 14:38:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.739 14:38:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:41.739 14:38:20 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.739 00:05:41.739 14:38:20 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:41.739 14:38:20 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.739 INFO: Checking if target configuration is the same... 00:05:41.739 14:38:20 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.739 14:38:20 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:41.739 14:38:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.739 + '[' 2 -ne 2 ']' 00:05:41.739 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.739 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.739 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.739 +++ basename /dev/fd/62 00:05:41.739 ++ mktemp /tmp/62.XXX 00:05:41.739 + tmp_file_1=/tmp/62.6tS 00:05:41.739 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.739 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.739 + tmp_file_2=/tmp/spdk_tgt_config.json.i3o 00:05:41.739 + ret=0 00:05:41.739 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.996 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.253 + diff -u /tmp/62.6tS /tmp/spdk_tgt_config.json.i3o 00:05:42.253 + echo 'INFO: JSON config files are the same' 00:05:42.253 INFO: JSON config files are the same 00:05:42.253 + rm /tmp/62.6tS /tmp/spdk_tgt_config.json.i3o 00:05:42.253 + exit 0 00:05:42.253 14:38:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:42.253 14:38:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:42.253 INFO: changing configuration and checking if this can be detected... 
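Both json_diff.sh runs in this test reduce to the same comparison: dump the live configuration with save_config, normalize both sides with config_filter.py -method sort, and diff the results. An empty diff exits 0 ("JSON config files are the same"); after MallocBdevForConfigChangeCheck is deleted below, the diff is non-empty and the check returns 1 ("configuration change detected."). A rough sketch of that flow, with illustrative temporary file names and script paths abbreviated (this is a sketch of the traced behavior, not the script itself):

    rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    config_filter.py -method sort < live.json            > live.sorted
    config_filter.py -method sort < spdk_tgt_config.json > file.sorted
    if diff -u file.sorted live.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi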
00:05:42.253 14:38:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.253 14:38:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.510 14:38:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.510 14:38:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:42.510 14:38:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.510 + '[' 2 -ne 2 ']' 00:05:42.510 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.510 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.510 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.510 +++ basename /dev/fd/62 00:05:42.510 ++ mktemp /tmp/62.XXX 00:05:42.510 + tmp_file_1=/tmp/62.Ru2 00:05:42.510 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.510 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.510 + tmp_file_2=/tmp/spdk_tgt_config.json.fFP 00:05:42.510 + ret=0 00:05:42.510 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.767 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.767 + diff -u /tmp/62.Ru2 /tmp/spdk_tgt_config.json.fFP 00:05:42.767 + ret=1 00:05:42.767 + echo '=== Start of file: /tmp/62.Ru2 ===' 00:05:42.767 + cat /tmp/62.Ru2 00:05:42.767 + echo '=== End of file: /tmp/62.Ru2 ===' 00:05:42.767 + echo '' 00:05:42.767 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fFP ===' 00:05:42.767 + cat /tmp/spdk_tgt_config.json.fFP 00:05:42.767 + echo '=== End of file: /tmp/spdk_tgt_config.json.fFP ===' 00:05:42.767 + echo '' 00:05:42.767 + rm /tmp/62.Ru2 /tmp/spdk_tgt_config.json.fFP 00:05:42.767 + exit 1 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:42.767 INFO: configuration change detected. 
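For reference, the configuration being saved, reloaded, and compared in this test is the small NVMe/TCP target assembled earlier in the trace: two malloc bdevs, a TCP transport, one subsystem with both bdevs attached as namespaces, and a listener on 127.0.0.1 port 4420. The corresponding RPC sequence, taken from the tgt_rpc calls above with script paths abbreviated (a sketch for orientation, not the test script):

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420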
00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 1752361 ]] 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 14:38:22 json_config -- json_config/json_config.sh@323 -- # killprocess 1752361 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@948 -- # '[' -z 1752361 ']' 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@952 -- # kill -0 1752361 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@953 -- # uname 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.767 14:38:22 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752361 00:05:43.024 14:38:22 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.024 14:38:22 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.024 14:38:22 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752361' 00:05:43.024 killing process with pid 1752361 00:05:43.024 14:38:22 json_config -- common/autotest_common.sh@967 -- # kill 1752361 00:05:43.024 14:38:22 json_config -- common/autotest_common.sh@972 -- # wait 1752361 00:05:45.549 14:38:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.549 14:38:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:45.549 14:38:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.549 14:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.549 14:38:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:45.549 14:38:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:45.549 INFO: Success 00:05:45.549 00:05:45.549 real 0m19.995s 
00:05:45.549 user 0m21.307s 00:05:45.549 sys 0m2.682s 00:05:45.549 14:38:24 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.549 14:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.549 ************************************ 00:05:45.549 END TEST json_config 00:05:45.549 ************************************ 00:05:45.549 14:38:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.549 14:38:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.549 14:38:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.549 14:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.549 14:38:24 -- common/autotest_common.sh@10 -- # set +x 00:05:45.549 ************************************ 00:05:45.549 START TEST json_config_extra_key 00:05:45.549 ************************************ 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.550 14:38:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.550 14:38:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.550 14:38:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.550 14:38:24 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.550 14:38:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.550 14:38:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.550 14:38:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:45.550 14:38:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.550 14:38:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.550 14:38:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:45.550 INFO: launching applications... 00:05:45.550 14:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1753545 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.550 Waiting for target to run... 00:05:45.550 14:38:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1753545 /var/tmp/spdk_tgt.sock 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1753545 ']' 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.550 14:38:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.550 [2024-07-14 14:38:24.714870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:45.550 [2024-07-14 14:38:24.715037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753545 ] 00:05:45.550 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.116 [2024-07-14 14:38:25.312569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.374 [2024-07-14 14:38:25.551611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.940 14:38:26 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.940 14:38:26 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:46.940 14:38:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:46.940 00:05:46.941 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:46.941 INFO: shutting down applications... 00:05:46.941 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1753545 ]] 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1753545 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:46.941 14:38:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.507 14:38:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.507 14:38:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.507 14:38:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:47.507 14:38:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.074 14:38:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.074 14:38:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.074 14:38:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:48.074 14:38:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.639 14:38:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.639 14:38:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.639 14:38:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:48.639 14:38:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.206 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.206 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.206 14:38:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:49.206 14:38:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.463 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.463 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.463 14:38:28 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:49.463 14:38:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1753545 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.029 14:38:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.029 SPDK target shutdown done 00:05:50.029 14:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.029 Success 00:05:50.029 00:05:50.029 real 0m4.695s 00:05:50.029 user 0m4.208s 00:05:50.029 sys 0m0.821s 00:05:50.029 14:38:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.029 14:38:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.029 ************************************ 00:05:50.029 END TEST json_config_extra_key 00:05:50.029 ************************************ 00:05:50.029 14:38:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.029 14:38:29 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.029 14:38:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.029 14:38:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.029 14:38:29 -- common/autotest_common.sh@10 -- # set +x 00:05:50.029 ************************************ 00:05:50.029 START TEST alias_rpc 00:05:50.029 ************************************ 00:05:50.029 14:38:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.288 * Looking for test storage... 00:05:50.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.288 14:38:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.288 14:38:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1754128 00:05:50.288 14:38:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.288 14:38:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1754128 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1754128 ']' 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.288 14:38:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.288 [2024-07-14 14:38:29.450526] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:50.288 [2024-07-14 14:38:29.450679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754128 ] 00:05:50.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.288 [2024-07-14 14:38:29.570254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.546 [2024-07-14 14:38:29.823753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.480 14:38:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.480 14:38:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.480 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.737 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1754128 00:05:51.737 14:38:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1754128 ']' 00:05:51.737 14:38:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1754128 00:05:51.737 14:38:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:51.737 14:38:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.737 14:38:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754128 00:05:51.737 14:38:31 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.737 14:38:31 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.737 14:38:31 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754128' 00:05:51.737 killing process with pid 1754128 00:05:51.737 14:38:31 alias_rpc -- common/autotest_common.sh@967 -- # kill 1754128 00:05:51.737 14:38:31 alias_rpc -- common/autotest_common.sh@972 -- # wait 1754128 00:05:54.267 00:05:54.267 real 0m4.166s 00:05:54.267 user 0m4.316s 00:05:54.267 sys 0m0.573s 00:05:54.267 14:38:33 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.267 14:38:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.267 ************************************ 00:05:54.267 END TEST alias_rpc 00:05:54.267 ************************************ 00:05:54.267 14:38:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.267 14:38:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:54.267 14:38:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.267 14:38:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.267 14:38:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.267 14:38:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.267 ************************************ 00:05:54.267 START TEST spdkcli_tcp 00:05:54.267 ************************************ 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.267 * Looking for test storage... 
00:05:54.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1754717 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1754717 00:05:54.267 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1754717 ']' 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.267 14:38:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.524 [2024-07-14 14:38:33.658637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
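The spdkcli_tcp run starting here checks that RPC works over TCP as well as over the UNIX socket: spdk_tgt listens on /var/tmp/spdk.sock, socat bridges that socket to 127.0.0.1:9998, and rpc.py is pointed at the TCP side. The two commands the test issues (both appear verbatim a few lines below) reduce to:

    # expose the SPDK RPC UNIX socket on 127.0.0.1:9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # query the method list over TCP; -r = connection retries, -t = timeout in seconds
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long JSON array that follows is the rpc_get_methods response relayed through that bridge.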
00:05:54.524 [2024-07-14 14:38:33.658792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754717 ] 00:05:54.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.524 [2024-07-14 14:38:33.781795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.780 [2024-07-14 14:38:34.037076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.780 [2024-07-14 14:38:34.037083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.712 14:38:34 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.712 14:38:34 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:55.712 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1754856 00:05:55.712 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.712 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.971 [ 00:05:55.971 "bdev_malloc_delete", 00:05:55.971 "bdev_malloc_create", 00:05:55.971 "bdev_null_resize", 00:05:55.971 "bdev_null_delete", 00:05:55.971 "bdev_null_create", 00:05:55.971 "bdev_nvme_cuse_unregister", 00:05:55.971 "bdev_nvme_cuse_register", 00:05:55.971 "bdev_opal_new_user", 00:05:55.971 "bdev_opal_set_lock_state", 00:05:55.971 "bdev_opal_delete", 00:05:55.971 "bdev_opal_get_info", 00:05:55.971 "bdev_opal_create", 00:05:55.971 "bdev_nvme_opal_revert", 00:05:55.971 "bdev_nvme_opal_init", 00:05:55.971 "bdev_nvme_send_cmd", 00:05:55.971 "bdev_nvme_get_path_iostat", 00:05:55.971 "bdev_nvme_get_mdns_discovery_info", 00:05:55.971 "bdev_nvme_stop_mdns_discovery", 00:05:55.971 "bdev_nvme_start_mdns_discovery", 00:05:55.971 "bdev_nvme_set_multipath_policy", 00:05:55.971 "bdev_nvme_set_preferred_path", 00:05:55.971 "bdev_nvme_get_io_paths", 00:05:55.971 "bdev_nvme_remove_error_injection", 00:05:55.971 "bdev_nvme_add_error_injection", 00:05:55.971 "bdev_nvme_get_discovery_info", 00:05:55.971 "bdev_nvme_stop_discovery", 00:05:55.971 "bdev_nvme_start_discovery", 00:05:55.971 "bdev_nvme_get_controller_health_info", 00:05:55.971 "bdev_nvme_disable_controller", 00:05:55.971 "bdev_nvme_enable_controller", 00:05:55.971 "bdev_nvme_reset_controller", 00:05:55.971 "bdev_nvme_get_transport_statistics", 00:05:55.971 "bdev_nvme_apply_firmware", 00:05:55.971 "bdev_nvme_detach_controller", 00:05:55.971 "bdev_nvme_get_controllers", 00:05:55.971 "bdev_nvme_attach_controller", 00:05:55.971 "bdev_nvme_set_hotplug", 00:05:55.971 "bdev_nvme_set_options", 00:05:55.971 "bdev_passthru_delete", 00:05:55.971 "bdev_passthru_create", 00:05:55.971 "bdev_lvol_set_parent_bdev", 00:05:55.971 "bdev_lvol_set_parent", 00:05:55.971 "bdev_lvol_check_shallow_copy", 00:05:55.971 "bdev_lvol_start_shallow_copy", 00:05:55.971 "bdev_lvol_grow_lvstore", 00:05:55.971 "bdev_lvol_get_lvols", 00:05:55.971 "bdev_lvol_get_lvstores", 00:05:55.971 "bdev_lvol_delete", 00:05:55.971 "bdev_lvol_set_read_only", 00:05:55.971 "bdev_lvol_resize", 00:05:55.971 "bdev_lvol_decouple_parent", 00:05:55.971 "bdev_lvol_inflate", 00:05:55.971 "bdev_lvol_rename", 00:05:55.971 "bdev_lvol_clone_bdev", 00:05:55.971 "bdev_lvol_clone", 00:05:55.971 "bdev_lvol_snapshot", 00:05:55.971 "bdev_lvol_create", 00:05:55.971 "bdev_lvol_delete_lvstore", 00:05:55.971 
"bdev_lvol_rename_lvstore", 00:05:55.971 "bdev_lvol_create_lvstore", 00:05:55.971 "bdev_raid_set_options", 00:05:55.971 "bdev_raid_remove_base_bdev", 00:05:55.971 "bdev_raid_add_base_bdev", 00:05:55.971 "bdev_raid_delete", 00:05:55.971 "bdev_raid_create", 00:05:55.971 "bdev_raid_get_bdevs", 00:05:55.971 "bdev_error_inject_error", 00:05:55.971 "bdev_error_delete", 00:05:55.971 "bdev_error_create", 00:05:55.971 "bdev_split_delete", 00:05:55.971 "bdev_split_create", 00:05:55.971 "bdev_delay_delete", 00:05:55.971 "bdev_delay_create", 00:05:55.971 "bdev_delay_update_latency", 00:05:55.971 "bdev_zone_block_delete", 00:05:55.971 "bdev_zone_block_create", 00:05:55.971 "blobfs_create", 00:05:55.971 "blobfs_detect", 00:05:55.971 "blobfs_set_cache_size", 00:05:55.971 "bdev_aio_delete", 00:05:55.971 "bdev_aio_rescan", 00:05:55.971 "bdev_aio_create", 00:05:55.971 "bdev_ftl_set_property", 00:05:55.971 "bdev_ftl_get_properties", 00:05:55.971 "bdev_ftl_get_stats", 00:05:55.971 "bdev_ftl_unmap", 00:05:55.971 "bdev_ftl_unload", 00:05:55.971 "bdev_ftl_delete", 00:05:55.971 "bdev_ftl_load", 00:05:55.971 "bdev_ftl_create", 00:05:55.971 "bdev_virtio_attach_controller", 00:05:55.971 "bdev_virtio_scsi_get_devices", 00:05:55.971 "bdev_virtio_detach_controller", 00:05:55.971 "bdev_virtio_blk_set_hotplug", 00:05:55.971 "bdev_iscsi_delete", 00:05:55.971 "bdev_iscsi_create", 00:05:55.971 "bdev_iscsi_set_options", 00:05:55.971 "accel_error_inject_error", 00:05:55.971 "ioat_scan_accel_module", 00:05:55.971 "dsa_scan_accel_module", 00:05:55.971 "iaa_scan_accel_module", 00:05:55.971 "keyring_file_remove_key", 00:05:55.971 "keyring_file_add_key", 00:05:55.971 "keyring_linux_set_options", 00:05:55.971 "iscsi_get_histogram", 00:05:55.971 "iscsi_enable_histogram", 00:05:55.971 "iscsi_set_options", 00:05:55.971 "iscsi_get_auth_groups", 00:05:55.971 "iscsi_auth_group_remove_secret", 00:05:55.971 "iscsi_auth_group_add_secret", 00:05:55.971 "iscsi_delete_auth_group", 00:05:55.971 "iscsi_create_auth_group", 00:05:55.971 "iscsi_set_discovery_auth", 00:05:55.971 "iscsi_get_options", 00:05:55.971 "iscsi_target_node_request_logout", 00:05:55.971 "iscsi_target_node_set_redirect", 00:05:55.971 "iscsi_target_node_set_auth", 00:05:55.971 "iscsi_target_node_add_lun", 00:05:55.971 "iscsi_get_stats", 00:05:55.971 "iscsi_get_connections", 00:05:55.971 "iscsi_portal_group_set_auth", 00:05:55.971 "iscsi_start_portal_group", 00:05:55.971 "iscsi_delete_portal_group", 00:05:55.971 "iscsi_create_portal_group", 00:05:55.971 "iscsi_get_portal_groups", 00:05:55.971 "iscsi_delete_target_node", 00:05:55.971 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.971 "iscsi_target_node_add_pg_ig_maps", 00:05:55.971 "iscsi_create_target_node", 00:05:55.971 "iscsi_get_target_nodes", 00:05:55.971 "iscsi_delete_initiator_group", 00:05:55.971 "iscsi_initiator_group_remove_initiators", 00:05:55.971 "iscsi_initiator_group_add_initiators", 00:05:55.971 "iscsi_create_initiator_group", 00:05:55.971 "iscsi_get_initiator_groups", 00:05:55.971 "nvmf_set_crdt", 00:05:55.972 "nvmf_set_config", 00:05:55.972 "nvmf_set_max_subsystems", 00:05:55.972 "nvmf_stop_mdns_prr", 00:05:55.972 "nvmf_publish_mdns_prr", 00:05:55.972 "nvmf_subsystem_get_listeners", 00:05:55.972 "nvmf_subsystem_get_qpairs", 00:05:55.972 "nvmf_subsystem_get_controllers", 00:05:55.972 "nvmf_get_stats", 00:05:55.972 "nvmf_get_transports", 00:05:55.972 "nvmf_create_transport", 00:05:55.972 "nvmf_get_targets", 00:05:55.972 "nvmf_delete_target", 00:05:55.972 "nvmf_create_target", 00:05:55.972 
"nvmf_subsystem_allow_any_host", 00:05:55.972 "nvmf_subsystem_remove_host", 00:05:55.972 "nvmf_subsystem_add_host", 00:05:55.972 "nvmf_ns_remove_host", 00:05:55.972 "nvmf_ns_add_host", 00:05:55.972 "nvmf_subsystem_remove_ns", 00:05:55.972 "nvmf_subsystem_add_ns", 00:05:55.972 "nvmf_subsystem_listener_set_ana_state", 00:05:55.972 "nvmf_discovery_get_referrals", 00:05:55.972 "nvmf_discovery_remove_referral", 00:05:55.972 "nvmf_discovery_add_referral", 00:05:55.972 "nvmf_subsystem_remove_listener", 00:05:55.972 "nvmf_subsystem_add_listener", 00:05:55.972 "nvmf_delete_subsystem", 00:05:55.972 "nvmf_create_subsystem", 00:05:55.972 "nvmf_get_subsystems", 00:05:55.972 "env_dpdk_get_mem_stats", 00:05:55.972 "nbd_get_disks", 00:05:55.972 "nbd_stop_disk", 00:05:55.972 "nbd_start_disk", 00:05:55.972 "ublk_recover_disk", 00:05:55.972 "ublk_get_disks", 00:05:55.972 "ublk_stop_disk", 00:05:55.972 "ublk_start_disk", 00:05:55.972 "ublk_destroy_target", 00:05:55.972 "ublk_create_target", 00:05:55.972 "virtio_blk_create_transport", 00:05:55.972 "virtio_blk_get_transports", 00:05:55.972 "vhost_controller_set_coalescing", 00:05:55.972 "vhost_get_controllers", 00:05:55.972 "vhost_delete_controller", 00:05:55.972 "vhost_create_blk_controller", 00:05:55.972 "vhost_scsi_controller_remove_target", 00:05:55.972 "vhost_scsi_controller_add_target", 00:05:55.972 "vhost_start_scsi_controller", 00:05:55.972 "vhost_create_scsi_controller", 00:05:55.972 "thread_set_cpumask", 00:05:55.972 "framework_get_governor", 00:05:55.972 "framework_get_scheduler", 00:05:55.972 "framework_set_scheduler", 00:05:55.972 "framework_get_reactors", 00:05:55.972 "thread_get_io_channels", 00:05:55.972 "thread_get_pollers", 00:05:55.972 "thread_get_stats", 00:05:55.972 "framework_monitor_context_switch", 00:05:55.972 "spdk_kill_instance", 00:05:55.972 "log_enable_timestamps", 00:05:55.972 "log_get_flags", 00:05:55.972 "log_clear_flag", 00:05:55.972 "log_set_flag", 00:05:55.972 "log_get_level", 00:05:55.972 "log_set_level", 00:05:55.972 "log_get_print_level", 00:05:55.972 "log_set_print_level", 00:05:55.972 "framework_enable_cpumask_locks", 00:05:55.972 "framework_disable_cpumask_locks", 00:05:55.972 "framework_wait_init", 00:05:55.972 "framework_start_init", 00:05:55.972 "scsi_get_devices", 00:05:55.972 "bdev_get_histogram", 00:05:55.972 "bdev_enable_histogram", 00:05:55.972 "bdev_set_qos_limit", 00:05:55.972 "bdev_set_qd_sampling_period", 00:05:55.972 "bdev_get_bdevs", 00:05:55.972 "bdev_reset_iostat", 00:05:55.972 "bdev_get_iostat", 00:05:55.972 "bdev_examine", 00:05:55.972 "bdev_wait_for_examine", 00:05:55.972 "bdev_set_options", 00:05:55.972 "notify_get_notifications", 00:05:55.972 "notify_get_types", 00:05:55.972 "accel_get_stats", 00:05:55.972 "accel_set_options", 00:05:55.972 "accel_set_driver", 00:05:55.972 "accel_crypto_key_destroy", 00:05:55.972 "accel_crypto_keys_get", 00:05:55.972 "accel_crypto_key_create", 00:05:55.972 "accel_assign_opc", 00:05:55.972 "accel_get_module_info", 00:05:55.972 "accel_get_opc_assignments", 00:05:55.972 "vmd_rescan", 00:05:55.972 "vmd_remove_device", 00:05:55.972 "vmd_enable", 00:05:55.972 "sock_get_default_impl", 00:05:55.972 "sock_set_default_impl", 00:05:55.972 "sock_impl_set_options", 00:05:55.972 "sock_impl_get_options", 00:05:55.972 "iobuf_get_stats", 00:05:55.972 "iobuf_set_options", 00:05:55.972 "framework_get_pci_devices", 00:05:55.972 "framework_get_config", 00:05:55.972 "framework_get_subsystems", 00:05:55.972 "trace_get_info", 00:05:55.972 "trace_get_tpoint_group_mask", 00:05:55.972 
"trace_disable_tpoint_group", 00:05:55.972 "trace_enable_tpoint_group", 00:05:55.972 "trace_clear_tpoint_mask", 00:05:55.972 "trace_set_tpoint_mask", 00:05:55.972 "keyring_get_keys", 00:05:55.972 "spdk_get_version", 00:05:55.972 "rpc_get_methods" 00:05:55.972 ] 00:05:55.972 14:38:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.972 14:38:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.972 14:38:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1754717 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1754717 ']' 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1754717 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754717 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754717' 00:05:55.972 killing process with pid 1754717 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1754717 00:05:55.972 14:38:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1754717 00:05:58.545 00:05:58.545 real 0m4.127s 00:05:58.545 user 0m7.338s 00:05:58.545 sys 0m0.632s 00:05:58.545 14:38:37 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.545 14:38:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 END TEST spdkcli_tcp 00:05:58.545 ************************************ 00:05:58.545 14:38:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.545 14:38:37 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.545 14:38:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.545 14:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.545 14:38:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 START TEST dpdk_mem_utility 00:05:58.545 ************************************ 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.545 * Looking for test storage... 
00:05:58.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:58.545 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.545 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1755194 00:05:58.545 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.545 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1755194 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1755194 ']' 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.545 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 [2024-07-14 14:38:37.835455] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:58.545 [2024-07-14 14:38:37.835600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755194 ] 00:05:58.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.802 [2024-07-14 14:38:37.962015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.059 [2024-07-14 14:38:38.217053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.993 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.993 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:59.993 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.993 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.993 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.993 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.993 { 00:05:59.993 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.993 } 00:05:59.993 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.993 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.993 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:59.993 1 heaps totaling size 820.000000 MiB 00:05:59.993 size: 820.000000 MiB heap id: 0 00:05:59.993 end heaps---------- 00:05:59.993 8 mempools totaling size 598.116089 MiB 00:05:59.993 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.993 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.993 size: 84.521057 MiB name: bdev_io_1755194 00:05:59.993 size: 51.011292 MiB name: evtpool_1755194 00:05:59.993 
size: 50.003479 MiB name: msgpool_1755194 00:05:59.993 size: 21.763794 MiB name: PDU_Pool 00:05:59.993 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.993 size: 0.026123 MiB name: Session_Pool 00:05:59.993 end mempools------- 00:05:59.993 6 memzones totaling size 4.142822 MiB 00:05:59.993 size: 1.000366 MiB name: RG_ring_0_1755194 00:05:59.993 size: 1.000366 MiB name: RG_ring_1_1755194 00:05:59.993 size: 1.000366 MiB name: RG_ring_4_1755194 00:05:59.993 size: 1.000366 MiB name: RG_ring_5_1755194 00:05:59.993 size: 0.125366 MiB name: RG_ring_2_1755194 00:05:59.993 size: 0.015991 MiB name: RG_ring_3_1755194 00:05:59.993 end memzones------- 00:05:59.993 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.993 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:59.993 list of free elements. size: 18.514832 MiB 00:05:59.993 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:59.993 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:59.993 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:59.993 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:59.993 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:59.993 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:59.993 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:59.993 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:59.993 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:59.993 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:59.993 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:59.993 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:59.993 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:59.993 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:59.993 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:59.993 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:59.993 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:59.993 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:59.993 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:59.993 list of standard malloc elements. 
size: 199.220764 MiB 00:05:59.993 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:59.993 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:59.993 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:59.993 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:59.993 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:59.993 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:59.993 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:59.993 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:59.993 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:59.993 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:59.993 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:59.993 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:59.993 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:59.993 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:59.993 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:59.993 list of memzone associated elements. 
size: 602.264404 MiB 00:05:59.993 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:59.993 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.993 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:59.993 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.993 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:59.993 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1755194_0 00:05:59.993 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:59.994 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1755194_0 00:05:59.994 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:59.994 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1755194_0 00:05:59.994 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:59.994 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.994 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:59.994 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.994 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:59.994 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1755194 00:05:59.994 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:59.994 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1755194 00:05:59.994 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:59.994 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1755194 00:05:59.994 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:59.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.994 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:59.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.994 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:59.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.994 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:59.994 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.994 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:59.994 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1755194 00:05:59.994 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:59.994 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1755194 00:05:59.994 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:59.994 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1755194 00:05:59.994 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:59.994 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1755194 00:05:59.994 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:59.994 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1755194 00:05:59.994 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:59.994 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.994 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:59.994 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.994 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:59.994 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.994 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:59.994 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1755194 00:05:59.994 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:59.994 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.994 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:59.994 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.994 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:59.994 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1755194 00:05:59.994 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:59.994 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.994 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:59.994 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1755194 00:05:59.994 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:59.994 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1755194 00:05:59.994 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:59.994 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.994 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.994 14:38:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1755194 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1755194 ']' 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1755194 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1755194 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1755194' 00:05:59.994 killing process with pid 1755194 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1755194 00:05:59.994 14:38:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1755194 00:06:02.520 00:06:02.520 real 0m4.055s 00:06:02.520 user 0m4.026s 00:06:02.520 sys 0m0.616s 00:06:02.520 14:38:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.520 14:38:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 ************************************ 00:06:02.520 END TEST dpdk_mem_utility 00:06:02.520 ************************************ 00:06:02.520 14:38:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.520 14:38:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.520 14:38:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.520 14:38:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.520 14:38:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 ************************************ 00:06:02.520 START TEST event 00:06:02.520 ************************************ 00:06:02.520 14:38:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.778 * Looking for test storage... 
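The heap, mempool and memzone listings above come out of a two-step flow in test_dpdk_mem_info.sh: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then post-processes that dump (the plain invocation prints the summary, -m 0 prints the per-element view for heap id 0). Roughly:

    # ask the running spdk_tgt to dump its DPDK memory state
    scripts/rpc.py env_dpdk_get_mem_stats     # response names /tmp/spdk_mem_dump.txt

    # summarize heaps, mempools and memzones from the dump
    scripts/dpdk_mem_info.py

    # detailed free/malloc/memzone element listing for heap id 0
    scripts/dpdk_mem_info.py -m 0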
00:06:02.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.778 14:38:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:02.778 14:38:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.778 14:38:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.778 14:38:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:02.778 14:38:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.778 14:38:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.778 ************************************ 00:06:02.778 START TEST event_perf 00:06:02.778 ************************************ 00:06:02.778 14:38:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.778 Running I/O for 1 seconds...[2024-07-14 14:38:41.904506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:02.778 [2024-07-14 14:38:41.904630] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755783 ] 00:06:02.778 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.778 [2024-07-14 14:38:42.031611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.036 [2024-07-14 14:38:42.292656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.036 [2024-07-14 14:38:42.292721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.036 [2024-07-14 14:38:42.292812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.036 [2024-07-14 14:38:42.292837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.415 Running I/O for 1 seconds... 00:06:04.415 lcore 0: 224825 00:06:04.415 lcore 1: 224826 00:06:04.415 lcore 2: 224825 00:06:04.415 lcore 3: 224826 00:06:04.672 done. 00:06:04.672 00:06:04.672 real 0m1.874s 00:06:04.672 user 0m4.684s 00:06:04.672 sys 0m0.175s 00:06:04.672 14:38:43 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.672 14:38:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.672 ************************************ 00:06:04.672 END TEST event_perf 00:06:04.672 ************************************ 00:06:04.672 14:38:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.672 14:38:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.672 14:38:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:04.672 14:38:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.672 14:38:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.672 ************************************ 00:06:04.672 START TEST event_reactor 00:06:04.672 ************************************ 00:06:04.672 14:38:43 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.672 [2024-07-14 14:38:43.819178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
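The per-lcore counters above are the whole output of the event_perf benchmark: each reactor runs its event loop for the requested time and reports how many events it processed, here about 225k per core in one second on a four-core mask. The invocation recorded in the trace is simply:

    # four reactors (core mask 0xF), generate and count events for 1 second
    test/event/event_perf/event_perf -m 0xF -t 1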
00:06:04.672 [2024-07-14 14:38:43.819319] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756067 ] 00:06:04.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.672 [2024-07-14 14:38:43.964375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.929 [2024-07-14 14:38:44.219951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.824 test_start 00:06:06.824 oneshot 00:06:06.824 tick 100 00:06:06.824 tick 100 00:06:06.824 tick 250 00:06:06.824 tick 100 00:06:06.824 tick 100 00:06:06.824 tick 100 00:06:06.824 tick 250 00:06:06.824 tick 500 00:06:06.824 tick 100 00:06:06.824 tick 100 00:06:06.824 tick 250 00:06:06.824 tick 100 00:06:06.824 tick 100 00:06:06.824 test_end 00:06:06.824 00:06:06.824 real 0m1.879s 00:06:06.824 user 0m1.705s 00:06:06.824 sys 0m0.165s 00:06:06.824 14:38:45 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.824 14:38:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.824 ************************************ 00:06:06.824 END TEST event_reactor 00:06:06.824 ************************************ 00:06:06.824 14:38:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.824 14:38:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.824 14:38:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.824 14:38:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.824 14:38:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.824 ************************************ 00:06:06.824 START TEST event_reactor_perf 00:06:06.824 ************************************ 00:06:06.824 14:38:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.824 [2024-07-14 14:38:45.742789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:06.824 [2024-07-14 14:38:45.742930] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756232 ] 00:06:06.824 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.824 [2024-07-14 14:38:45.877634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.080 [2024-07-14 14:38:46.133682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.450 test_start 00:06:08.450 test_end 00:06:08.450 Performance: 268309 events per second 00:06:08.450 00:06:08.450 real 0m1.870s 00:06:08.450 user 0m1.681s 00:06:08.450 sys 0m0.179s 00:06:08.450 14:38:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.450 14:38:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.450 ************************************ 00:06:08.450 END TEST event_reactor_perf 00:06:08.450 ************************************ 00:06:08.450 14:38:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.450 14:38:47 event -- event/event.sh@49 -- # uname -s 00:06:08.450 14:38:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.450 14:38:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.450 14:38:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.450 14:38:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.450 14:38:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.450 ************************************ 00:06:08.450 START TEST event_scheduler 00:06:08.450 ************************************ 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.450 * Looking for test storage... 00:06:08.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:08.450 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.450 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1756544 00:06:08.450 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.450 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.450 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1756544 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1756544 ']' 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
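The "Performance: 268309 events per second" line a few entries up comes from reactor_perf, which posts back-to-back events to a single reactor (the trace shows core mask 0x1) for the requested duration and reports the resulting throughput; it can be rerun standalone with the same argument the harness used:

    # one reactor, measure event throughput for 1 second
    test/event/reactor_perf/reactor_perf -t 1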
00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.450 14:38:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.709 [2024-07-14 14:38:47.764061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:08.709 [2024-07-14 14:38:47.764212] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756544 ] 00:06:08.709 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.709 [2024-07-14 14:38:47.886110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.967 [2024-07-14 14:38:48.110942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.967 [2024-07-14 14:38:48.111004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.967 [2024-07-14 14:38:48.111048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.967 [2024-07-14 14:38:48.111051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:09.531 14:38:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.531 [2024-07-14 14:38:48.697767] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:09.531 [2024-07-14 14:38:48.697828] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:09.531 [2024-07-14 14:38:48.697861] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:09.531 [2024-07-14 14:38:48.697909] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:09.531 [2024-07-14 14:38:48.697929] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.531 14:38:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.531 14:38:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 [2024-07-14 14:38:48.994563] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:09.789 14:38:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:09.789 14:38:48 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.789 14:38:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.789 14:38:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 ************************************ 00:06:09.789 START TEST scheduler_create_thread 00:06:09.789 ************************************ 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 2 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 3 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 4 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 5 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 6 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 7 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 8 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.789 9 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.789 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 10 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.047 00:06:10.047 real 0m0.108s 00:06:10.047 user 0m0.015s 00:06:10.047 sys 0m0.004s 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.047 14:38:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.047 ************************************ 00:06:10.047 END TEST scheduler_create_thread 00:06:10.047 ************************************ 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:10.047 14:38:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.047 14:38:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1756544 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1756544 ']' 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1756544 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756544 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756544' 00:06:10.047 killing process with pid 1756544 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1756544 00:06:10.047 14:38:49 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1756544 00:06:10.612 [2024-07-14 14:38:49.617939] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
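The scheduler_create_thread sub-test above is a sequence of plugin RPCs against the scheduler test app, which was launched with --wait-for-rpc so the dynamic scheduler could be selected before subsystem init. Stripped of the harness plumbing (rpc_cmd is the autotest wrapper around scripts/rpc.py), a representative subset of those calls looks like:

    # pick the dynamic scheduler, then finish initialization
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init

    # create pinned threads with different activity levels (values from the trace)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0

    # lower one thread's active load to 50%, then delete another
    # (thread IDs 11 and 12 came from the create responses in this run)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12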
00:06:11.547 00:06:11.547 real 0m3.100s 00:06:11.547 user 0m5.010s 00:06:11.547 sys 0m0.498s 00:06:11.547 14:38:50 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.547 14:38:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.547 ************************************ 00:06:11.547 END TEST event_scheduler 00:06:11.547 ************************************ 00:06:11.547 14:38:50 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.547 14:38:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.547 14:38:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.547 14:38:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.547 14:38:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.547 14:38:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.547 ************************************ 00:06:11.547 START TEST app_repeat 00:06:11.547 ************************************ 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1756994 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1756994' 00:06:11.547 Process app_repeat pid: 1756994 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.547 spdk_app_start Round 0 00:06:11.547 14:38:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756994 /var/tmp/spdk-nbd.sock 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1756994 ']' 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.547 14:38:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.547 [2024-07-14 14:38:50.837480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
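The app_repeat round that begins here exercises bdev and nbd setup and teardown several times in a row: each round creates two 64 MiB malloc bdevs with a 4096-byte block size through the /var/tmp/spdk-nbd.sock RPC socket and exports them as /dev/nbd0 and /dev/nbd1 for data verification. One round, reduced to its RPC calls (the second nbd_start_disk is inferred from the nbd and bdev lists the test declares):

    rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create 64 4096          # -> Malloc0: 64 MiB, 4096-byte blocks
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1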
00:06:11.547 [2024-07-14 14:38:50.837628] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756994 ] 00:06:11.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.805 [2024-07-14 14:38:50.970547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.063 [2024-07-14 14:38:51.229191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.063 [2024-07-14 14:38:51.229196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.629 14:38:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.629 14:38:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.629 14:38:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.888 Malloc0 00:06:12.888 14:38:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.146 Malloc1 00:06:13.146 14:38:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.146 14:38:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.403 /dev/nbd0 00:06:13.403 14:38:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.403 14:38:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.403 14:38:52 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.403 14:38:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.661 1+0 records in 00:06:13.661 1+0 records out 00:06:13.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186159 s, 22.0 MB/s 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.661 14:38:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.661 14:38:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.661 14:38:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.661 14:38:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.918 /dev/nbd1 00:06:13.918 14:38:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.918 14:38:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.918 14:38:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.918 1+0 records in 00:06:13.918 1+0 records out 00:06:13.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277753 s, 14.7 MB/s 00:06:13.918 14:38:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.918 14:38:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.918 14:38:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.918 14:38:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.918 14:38:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.918 14:38:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.918 14:38:53 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.918 14:38:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.918 14:38:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.918 14:38:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.176 { 00:06:14.176 "nbd_device": "/dev/nbd0", 00:06:14.176 "bdev_name": "Malloc0" 00:06:14.176 }, 00:06:14.176 { 00:06:14.176 "nbd_device": "/dev/nbd1", 00:06:14.176 "bdev_name": "Malloc1" 00:06:14.176 } 00:06:14.176 ]' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.176 { 00:06:14.176 "nbd_device": "/dev/nbd0", 00:06:14.176 "bdev_name": "Malloc0" 00:06:14.176 }, 00:06:14.176 { 00:06:14.176 "nbd_device": "/dev/nbd1", 00:06:14.176 "bdev_name": "Malloc1" 00:06:14.176 } 00:06:14.176 ]' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.176 /dev/nbd1' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.176 /dev/nbd1' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.176 256+0 records in 00:06:14.176 256+0 records out 00:06:14.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489364 s, 214 MB/s 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.176 256+0 records in 00:06:14.176 256+0 records out 00:06:14.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237399 s, 44.2 MB/s 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.176 256+0 records in 00:06:14.176 256+0 records out 00:06:14.176 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0288547 s, 36.3 MB/s 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.176 14:38:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.433 14:38:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.690 14:38:53 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.690 14:38:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.691 14:38:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.691 14:38:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.691 14:38:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.948 14:38:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.948 14:38:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.547 14:38:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.920 [2024-07-14 14:38:56.039731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.178 [2024-07-14 14:38:56.295638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.178 [2024-07-14 14:38:56.295640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.437 [2024-07-14 14:38:56.503082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.437 [2024-07-14 14:38:56.503186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.370 14:38:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.370 14:38:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.370 spdk_app_start Round 1 00:06:18.370 14:38:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756994 /var/tmp/spdk-nbd.sock 00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1756994 ']' 00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
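The waitfornbd polling seen in each round above (a grep against /proc/partitions followed by a direct 4 KiB read) amounts to roughly the helper below. This is a reconstruction from the trace; $TEST_DIR stands in for the spdk/test/event directory used in the log paths, and the sleep back-off between retries is assumed.

    # Rough reconstruction of waitfornbd from the trace.
    waitfornbd() {
        local nbd_name=$1 i size

        # wait for the kernel to register the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # assumed back-off between polls
        done

        # the device can appear before it accepts I/O; retry a direct 4 KiB read
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$TEST_DIR/nbdtest" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$TEST_DIR/nbdtest")
                rm -f "$TEST_DIR/nbdtest"
                [ "$size" != 0 ] && return 0       # non-empty read means the device is usable
            else
                sleep 0.1
            fi
        done
        return 1
    }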
00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.370 14:38:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.627 14:38:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.627 14:38:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.627 14:38:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.192 Malloc0 00:06:19.192 14:38:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.451 Malloc1 00:06:19.451 14:38:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.451 14:38:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.726 /dev/nbd0 00:06:19.726 14:38:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.726 14:38:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.726 14:38:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.726 14:38:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:19.727 1+0 records in 00:06:19.727 1+0 records out 00:06:19.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238205 s, 17.2 MB/s 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.727 14:38:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.727 14:38:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.727 14:38:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.727 14:38:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.994 /dev/nbd1 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.994 1+0 records in 00:06:19.994 1+0 records out 00:06:19.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235416 s, 17.4 MB/s 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.994 14:38:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.994 14:38:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:20.251 { 00:06:20.251 "nbd_device": "/dev/nbd0", 00:06:20.251 "bdev_name": "Malloc0" 00:06:20.251 }, 00:06:20.251 { 00:06:20.251 "nbd_device": "/dev/nbd1", 00:06:20.251 "bdev_name": "Malloc1" 00:06:20.251 } 00:06:20.251 ]' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.251 { 00:06:20.251 "nbd_device": "/dev/nbd0", 00:06:20.251 "bdev_name": "Malloc0" 00:06:20.251 }, 00:06:20.251 { 00:06:20.251 "nbd_device": "/dev/nbd1", 00:06:20.251 "bdev_name": "Malloc1" 00:06:20.251 } 00:06:20.251 ]' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.251 /dev/nbd1' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.251 /dev/nbd1' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.251 14:38:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.252 14:38:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.252 256+0 records in 00:06:20.252 256+0 records out 00:06:20.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503202 s, 208 MB/s 00:06:20.252 14:38:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.252 14:38:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.509 256+0 records in 00:06:20.509 256+0 records out 00:06:20.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299412 s, 35.0 MB/s 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.509 256+0 records in 00:06:20.509 256+0 records out 00:06:20.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331904 s, 31.6 MB/s 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.509 14:38:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.767 14:38:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.024 14:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.281 14:39:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.281 14:39:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.847 14:39:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.222 [2024-07-14 14:39:02.307317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.480 [2024-07-14 14:39:02.558575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.480 [2024-07-14 14:39:02.558575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.480 [2024-07-14 14:39:02.779775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.480 [2024-07-14 14:39:02.779847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.853 14:39:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.853 14:39:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.853 spdk_app_start Round 2 00:06:24.853 14:39:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756994 /var/tmp/spdk-nbd.sock 00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1756994 ']' 00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
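The per-round data check just repeated for Round 1 (dd from /dev/urandom into a scratch file, dd onto each NBD device, then cmp against the same file) can be summarized as below. Again a sketch taken from the trace, with $TEST_DIR a placeholder for the spdk/test/event directory.

    # Sketch of nbd_dd_data_verify following the dd/cmp commands in the trace.
    nbd_dd_data_verify() {
        local nbd_list=($1)                # e.g. "/dev/nbd0 /dev/nbd1"
        local operation=$2                 # write | verify
        local tmp_file="$TEST_DIR/nbdrandtest"
        local i

        if [ "$operation" = write ]; then
            # 256 x 4 KiB of random data, copied onto every NBD device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # byte-for-byte compare of the first 1 MiB of each device
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }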
00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.853 14:39:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.111 14:39:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.111 14:39:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.111 14:39:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.369 Malloc0 00:06:25.369 14:39:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.627 Malloc1 00:06:25.627 14:39:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.627 14:39:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.886 /dev/nbd0 00:06:25.886 14:39:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.886 14:39:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:25.886 1+0 records in 00:06:25.886 1+0 records out 00:06:25.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001927 s, 21.3 MB/s 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.886 14:39:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.886 14:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.886 14:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.886 14:39:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.144 /dev/nbd1 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.144 1+0 records in 00:06:26.144 1+0 records out 00:06:26.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233093 s, 17.6 MB/s 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.144 14:39:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.144 14:39:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.404 14:39:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:26.404 { 00:06:26.404 "nbd_device": "/dev/nbd0", 00:06:26.404 "bdev_name": "Malloc0" 00:06:26.404 }, 00:06:26.404 { 00:06:26.404 "nbd_device": "/dev/nbd1", 00:06:26.404 "bdev_name": "Malloc1" 00:06:26.404 } 00:06:26.404 ]' 00:06:26.404 14:39:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.404 { 00:06:26.404 "nbd_device": "/dev/nbd0", 00:06:26.404 "bdev_name": "Malloc0" 00:06:26.404 }, 00:06:26.404 { 00:06:26.404 "nbd_device": "/dev/nbd1", 00:06:26.404 "bdev_name": "Malloc1" 00:06:26.404 } 00:06:26.404 ]' 00:06:26.404 14:39:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.662 /dev/nbd1' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.662 /dev/nbd1' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.662 256+0 records in 00:06:26.662 256+0 records out 00:06:26.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514096 s, 204 MB/s 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.662 256+0 records in 00:06:26.662 256+0 records out 00:06:26.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246579 s, 42.5 MB/s 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.662 256+0 records in 00:06:26.662 256+0 records out 00:06:26.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291967 s, 35.9 MB/s 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.662 14:39:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.920 14:39:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.178 14:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.436 14:39:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.436 14:39:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.001 14:39:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.373 [2024-07-14 14:39:08.474350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.630 [2024-07-14 14:39:08.732996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.630 [2024-07-14 14:39:08.733000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.888 [2024-07-14 14:39:08.949818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.888 [2024-07-14 14:39:08.949892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.819 14:39:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1756994 /var/tmp/spdk-nbd.sock 00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1756994 ']' 00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
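Teardown of each round, as traced above, stops every NBD disk over RPC and waits for the kernel device to disappear. A rough reconstruction follows; the poll back-off is assumed and $SPDK_DIR is the same placeholder as before.

    # Reconstruction of the per-round teardown helpers from the trace.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the name is gone from /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                              # assumed back-off between polls
        done
        return 0
    }

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)
        local i
        for i in "${nbd_list[@]}"; do
            "$SPDK_DIR/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }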
00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.819 14:39:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.076 14:39:10 event.app_repeat -- event/event.sh@39 -- # killprocess 1756994 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1756994 ']' 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1756994 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756994 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756994' 00:06:31.076 killing process with pid 1756994 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1756994 00:06:31.076 14:39:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1756994 00:06:32.484 spdk_app_start is called in Round 0. 00:06:32.484 Shutdown signal received, stop current app iteration 00:06:32.484 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:32.484 spdk_app_start is called in Round 1. 00:06:32.484 Shutdown signal received, stop current app iteration 00:06:32.484 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:32.484 spdk_app_start is called in Round 2. 00:06:32.484 Shutdown signal received, stop current app iteration 00:06:32.484 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:32.484 spdk_app_start is called in Round 3. 
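Putting the pieces together, the nbd_rpc_data_verify wrapper exercised once per round above is approximately the following; this is reconstructed from the nbd_common.sh trace, not the verbatim source, with $SPDK_DIR as before.

    # Approximate shape of nbd_rpc_data_verify and its small helpers.
    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)
        local nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            "$SPDK_DIR/scripts/rpc.py" -s "$rpc_server" \
                nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }

    nbd_get_count() {
        # how many /dev/nbd devices the target currently exports
        "$SPDK_DIR/scripts/rpc.py" -s "$1" nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    nbd_rpc_data_verify() {
        local rpc_server=$1
        local bdev_list=($2)
        local nbd_list=($3)
        local count

        nbd_start_disks "$rpc_server" "${bdev_list[*]}" "${nbd_list[*]}"
        count=$(nbd_get_count "$rpc_server")
        [ "$count" -ne "${#nbd_list[@]}" ] && return 1

        nbd_dd_data_verify "${nbd_list[*]}" write
        nbd_dd_data_verify "${nbd_list[*]}" verify

        nbd_stop_disks "$rpc_server" "${nbd_list[*]}"
        count=$(nbd_get_count "$rpc_server")
        [ "$count" -ne 0 ] && return 1
        return 0
    }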
00:06:32.484 Shutdown signal received, stop current app iteration 00:06:32.484 14:39:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:32.484 14:39:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:32.484 00:06:32.484 real 0m20.839s 00:06:32.484 user 0m42.873s 00:06:32.484 sys 0m3.446s 00:06:32.484 14:39:11 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.484 14:39:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.484 ************************************ 00:06:32.484 END TEST app_repeat 00:06:32.484 ************************************ 00:06:32.484 14:39:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.484 14:39:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:32.484 14:39:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.484 14:39:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.484 14:39:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.484 14:39:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.484 ************************************ 00:06:32.484 START TEST cpu_locks 00:06:32.484 ************************************ 00:06:32.484 14:39:11 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.484 * Looking for test storage... 00:06:32.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:32.484 14:39:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:32.484 14:39:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:32.484 14:39:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:32.484 14:39:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:32.484 14:39:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.484 14:39:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.484 14:39:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.484 ************************************ 00:06:32.484 START TEST default_locks 00:06:32.484 ************************************ 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1760266 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1760266 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1760266 ']' 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
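The default_locks case that starts here, and whose trace follows, boils down to roughly the sketch below: start spdk_tgt pinned to one core, check that it holds the per-core CPU lock, kill it, then confirm that waiting on the dead pid fails and that no lock remains. NOT, no_locks, waitforlisten and killprocess are harness helpers visible in the trace; $SPDK_DIR and the backgrounding with $! are assumptions, as before.

    # Sketch of the default_locks flow reconstructed from the cpu_locks.sh trace.
    locks_exist() {
        # spdk_tgt takes a lock that lslocks reports as spdk_cpu_lock; the
        # "lslocks: write error" in the log is consistent with grep -q closing
        # the pipe as soon as it finds a match.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    default_locks() {
        "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
        spdk_tgt_pid=$!                     # assumed pid capture
        waitforlisten "$spdk_tgt_pid"

        locks_exist "$spdk_tgt_pid"         # lock must be present while running
        killprocess "$spdk_tgt_pid"

        NOT waitforlisten "$spdk_tgt_pid"   # waiting on the dead pid must fail
        no_locks                            # and no spdk_cpu_lock files remain
    }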
00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.484 14:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.744 [2024-07-14 14:39:11.852383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:32.744 [2024-07-14 14:39:11.852536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760266 ] 00:06:32.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.744 [2024-07-14 14:39:11.976521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.004 [2024-07-14 14:39:12.229592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.940 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.940 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:33.940 14:39:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1760266 00:06:33.940 14:39:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1760266 00:06:33.940 14:39:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.507 lslocks: write error 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1760266 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1760266 ']' 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1760266 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1760266 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1760266' 00:06:34.507 killing process with pid 1760266 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1760266 00:06:34.507 14:39:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1760266 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1760266 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1760266 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1760266 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1760266 ']' 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1760266) - No such process 00:06:37.040 ERROR: process (pid: 1760266) is no longer running 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.040 00:06:37.040 real 0m4.423s 00:06:37.040 user 0m4.399s 00:06:37.040 sys 0m0.784s 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.040 14:39:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.040 ************************************ 00:06:37.040 END TEST default_locks 00:06:37.040 ************************************ 00:06:37.040 14:39:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:37.040 14:39:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.040 14:39:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.040 14:39:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.040 14:39:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.040 ************************************ 00:06:37.040 START TEST default_locks_via_rpc 00:06:37.040 ************************************ 00:06:37.040 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:37.040 14:39:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1760816 00:06:37.040 14:39:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.040 14:39:16 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1760816 00:06:37.040 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1760816 ']' 00:06:37.040 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.041 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.041 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.041 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.041 14:39:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.041 [2024-07-14 14:39:16.328474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:37.041 [2024-07-14 14:39:16.328610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760816 ] 00:06:37.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.300 [2024-07-14 14:39:16.460203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.560 [2024-07-14 14:39:16.720609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1760816 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1760816 00:06:38.497 14:39:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.757 
14:39:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1760816 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1760816 ']' 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1760816 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1760816 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1760816' 00:06:38.757 killing process with pid 1760816 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1760816 00:06:38.757 14:39:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1760816 00:06:41.290 00:06:41.290 real 0m4.359s 00:06:41.290 user 0m4.345s 00:06:41.290 sys 0m0.759s 00:06:41.290 14:39:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.290 14:39:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.290 ************************************ 00:06:41.290 END TEST default_locks_via_rpc 00:06:41.290 ************************************ 00:06:41.547 14:39:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.547 14:39:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.547 14:39:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.547 14:39:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.547 14:39:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.547 ************************************ 00:06:41.547 START TEST non_locking_app_on_locked_coremask 00:06:41.547 ************************************ 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1761372 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1761372 /var/tmp/spdk.sock 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1761372 ']' 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.547 14:39:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.547 [2024-07-14 14:39:20.733542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:41.547 [2024-07-14 14:39:20.733721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761372 ] 00:06:41.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.806 [2024-07-14 14:39:20.861604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.807 [2024-07-14 14:39:21.114968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1761558 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1761558 /var/tmp/spdk2.sock 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1761558 ']' 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.741 14:39:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.000 [2024-07-14 14:39:22.101704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:43.000 [2024-07-14 14:39:22.101841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761558 ] 00:06:43.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.000 [2024-07-14 14:39:22.291166] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
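The non_locking_app_on_locked_coremask case above starts a second target on the same core mask but with lock claiming turned off, and gives it its own RPC socket so both instances can be driven independently. The "CPU core locks deactivated" notice is what confirms the flag took effect. Reduced to the two launches seen in this trace (binary path as used in this job):

  spdk_tgt -m 0x1 &                                                   # takes the core 0 lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same core, no lock taken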
00:06:43.000 [2024-07-14 14:39:22.291251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.567 [2024-07-14 14:39:22.813435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.098 14:39:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.098 14:39:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.098 14:39:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1761372 00:06:46.098 14:39:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1761372 00:06:46.098 14:39:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.098 lslocks: write error 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1761372 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1761372 ']' 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1761372 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.098 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761372 00:06:46.357 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.357 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.357 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1761372' 00:06:46.357 killing process with pid 1761372 00:06:46.357 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1761372 00:06:46.357 14:39:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1761372 00:06:51.631 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1761558 00:06:51.631 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1761558 ']' 00:06:51.631 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1761558 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761558 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1761558' 00:06:51.632 
killing process with pid 1761558 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1761558 00:06:51.632 14:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1761558 00:06:54.161 00:06:54.161 real 0m12.424s 00:06:54.161 user 0m12.798s 00:06:54.161 sys 0m1.506s 00:06:54.161 14:39:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.161 14:39:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.161 ************************************ 00:06:54.161 END TEST non_locking_app_on_locked_coremask 00:06:54.161 ************************************ 00:06:54.161 14:39:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:54.161 14:39:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:54.161 14:39:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.161 14:39:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.161 14:39:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.161 ************************************ 00:06:54.161 START TEST locking_app_on_unlocked_coremask 00:06:54.161 ************************************ 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1762866 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1762866 /var/tmp/spdk.sock 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1762866 ']' 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.161 14:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.161 [2024-07-14 14:39:33.219163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
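Every teardown in this suite goes through the same killprocess trace shown above: confirm the PID still exists with kill -0, read the command name with ps so that a sudo wrapper is never signalled directly, print the "killing process" banner, then kill and wait. The real helper lives in autotest_common.sh; the visible steps amount to roughly:

  kill -0 "$pid"                              # fails if the process is already gone
  name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK target
  [ "$name" != sudo ] && kill "$pid"          # the harness then waits for the PID it spawned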
00:06:54.161 [2024-07-14 14:39:33.219298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762866 ] 00:06:54.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.161 [2024-07-14 14:39:33.352828] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.161 [2024-07-14 14:39:33.352907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.420 [2024-07-14 14:39:33.618190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1763128 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1763128 /var/tmp/spdk2.sock 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1763128 ']' 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.388 14:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.388 [2024-07-14 14:39:34.609004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:55.388 [2024-07-14 14:39:34.609144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763128 ] 00:06:55.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.649 [2024-07-14 14:39:34.782647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.217 [2024-07-14 14:39:35.304109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.124 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.124 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.124 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1763128 00:06:58.124 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1763128 00:06:58.124 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.690 lslocks: write error 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1762866 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1762866 ']' 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1762866 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.690 14:39:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1762866 00:06:58.948 14:39:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.948 14:39:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.948 14:39:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1762866' 00:06:58.948 killing process with pid 1762866 00:06:58.948 14:39:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1762866 00:06:58.948 14:39:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1762866 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1763128 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1763128 ']' 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1763128 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1763128 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1763128' 00:07:04.224 killing process with pid 1763128 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1763128 00:07:04.224 14:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1763128 00:07:06.758 00:07:06.758 real 0m12.556s 00:07:06.758 user 0m12.927s 00:07:06.758 sys 0m1.558s 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.758 ************************************ 00:07:06.758 END TEST locking_app_on_unlocked_coremask 00:07:06.758 ************************************ 00:07:06.758 14:39:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.758 14:39:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:06.758 14:39:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.758 14:39:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.758 14:39:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.758 ************************************ 00:07:06.758 START TEST locking_app_on_locked_coremask 00:07:06.758 ************************************ 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1764408 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1764408 /var/tmp/spdk.sock 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1764408 ']' 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.758 14:39:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.758 [2024-07-14 14:39:45.825155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:06.758 [2024-07-14 14:39:45.825298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764408 ] 00:07:06.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.758 [2024-07-14 14:39:45.955399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.016 [2024-07-14 14:39:46.217316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1764628 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1764628 /var/tmp/spdk2.sock 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1764628 /var/tmp/spdk2.sock 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1764628 /var/tmp/spdk2.sock 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1764628 ']' 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.951 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.951 [2024-07-14 14:39:47.207223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:07.951 [2024-07-14 14:39:47.207356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764628 ] 00:07:08.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.210 [2024-07-14 14:39:47.396252] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1764408 has claimed it. 00:07:08.210 [2024-07-14 14:39:47.396344] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1764628) - No such process 00:07:08.780 ERROR: process (pid: 1764628) is no longer running 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1764408 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1764408 00:07:08.780 14:39:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.039 lslocks: write error 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1764408 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1764408 ']' 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1764408 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1764408 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1764408' 00:07:09.039 killing process with pid 1764408 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1764408 00:07:09.039 14:39:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1764408 00:07:11.584 00:07:11.584 real 0m5.106s 00:07:11.584 user 0m5.254s 00:07:11.584 sys 0m0.967s 00:07:11.584 14:39:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.584 14:39:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.584 ************************************ 00:07:11.584 END TEST locking_app_on_locked_coremask 00:07:11.584 ************************************ 00:07:11.584 14:39:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.584 14:39:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.584 14:39:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.584 14:39:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.584 14:39:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.584 ************************************ 00:07:11.584 START TEST locking_overlapped_coremask 00:07:11.584 ************************************ 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1765069 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1765069 /var/tmp/spdk.sock 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1765069 ']' 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.584 14:39:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.844 [2024-07-14 14:39:50.986277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
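Both the locking_app_on_locked_coremask case that just finished and the locking_overlapped_coremask case starting here rely on the same failure mode: a second target whose core mask overlaps an already-claimed core logs "Cannot create lock on core N, probably process X has claimed it" and exits, which the test asserts by wrapping waitforlisten in the NOT helper. A two-line reproduction on an otherwise idle machine (same binary as this job, hugepages assumed configured) would be:

  spdk_tgt -m 0x1 &        # first instance claims core 0
  spdk_tgt -m 0x1          # second instance exits: Unable to acquire lock on assigned core mask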
00:07:11.844 [2024-07-14 14:39:50.986423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765069 ] 00:07:11.844 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.844 [2024-07-14 14:39:51.117172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.102 [2024-07-14 14:39:51.382694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.102 [2024-07-14 14:39:51.382744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.102 [2024-07-14 14:39:51.382750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1765207 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1765207 /var/tmp/spdk2.sock 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1765207 /var/tmp/spdk2.sock 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1765207 /var/tmp/spdk2.sock 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1765207 ']' 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.038 14:39:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.295 [2024-07-14 14:39:52.383740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:13.295 [2024-07-14 14:39:52.383895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765207 ] 00:07:13.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.295 [2024-07-14 14:39:52.563429] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1765069 has claimed it. 00:07:13.295 [2024-07-14 14:39:52.563521] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1765207) - No such process 00:07:13.862 ERROR: process (pid: 1765207) is no longer running 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.862 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1765069 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1765069 ']' 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1765069 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765069 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765069' 00:07:13.863 killing process with pid 1765069 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1765069 00:07:13.863 14:39:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1765069 00:07:16.393 00:07:16.393 real 0m4.480s 00:07:16.393 user 0m11.510s 00:07:16.393 sys 0m0.803s 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.393 ************************************ 00:07:16.393 END TEST locking_overlapped_coremask 00:07:16.393 ************************************ 00:07:16.393 14:39:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:16.393 14:39:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:16.393 14:39:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.393 14:39:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.393 14:39:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.393 ************************************ 00:07:16.393 START TEST locking_overlapped_coremask_via_rpc 00:07:16.393 ************************************ 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1765641 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1765641 /var/tmp/spdk.sock 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1765641 ']' 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.393 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.394 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.394 14:39:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.394 [2024-07-14 14:39:55.518607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:16.394 [2024-07-14 14:39:55.518770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765641 ] 00:07:16.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.394 [2024-07-14 14:39:55.644994] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
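The two overlapped tests use masks chosen so that exactly one core is contested: the primary target runs with -m 0x7 and the secondary with -m 0x1c. Written out, 0x7 = 0b00111 covers cores 0-2 and 0x1c = 0b11100 covers cores 2-4, so the only shared core, and the one named in the "Failed to claim CPU core: 2" error below, is core 2. A quick way to check an overlap before launching:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2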
00:07:16.394 [2024-07-14 14:39:55.645047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.651 [2024-07-14 14:39:55.902911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.651 [2024-07-14 14:39:55.902960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.651 [2024-07-14 14:39:55.902964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1765781 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1765781 /var/tmp/spdk2.sock 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1765781 ']' 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:17.629 14:39:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.629 [2024-07-14 14:39:56.900346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.629 [2024-07-14 14:39:56.900508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765781 ] 00:07:17.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.888 [2024-07-14 14:39:57.083400] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.888 [2024-07-14 14:39:57.083455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.452 [2024-07-14 14:39:57.552571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.452 [2024-07-14 14:39:57.555957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.452 [2024-07-14 14:39:57.555969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.356 [2024-07-14 14:39:59.647054] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1765641 has claimed it. 
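Here both targets were started with --disable-cpumask-locks, so neither held any lock at boot; the locks are then turned on at runtime. The first framework_enable_cpumask_locks call (against the default /var/tmp/spdk.sock) succeeds and claims cores 0-2; the same call against the second target's socket then fails because core 2 is already held. Outside the test harness the equivalent calls go through SPDK's rpc.py, something like:

  scripts/rpc.py framework_enable_cpumask_locks                          # first target claims its cores
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed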
00:07:20.356 request: 00:07:20.356 { 00:07:20.356 "method": "framework_enable_cpumask_locks", 00:07:20.356 "req_id": 1 00:07:20.356 } 00:07:20.356 Got JSON-RPC error response 00:07:20.356 response: 00:07:20.356 { 00:07:20.356 "code": -32603, 00:07:20.356 "message": "Failed to claim CPU core: 2" 00:07:20.356 } 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1765641 /var/tmp/spdk.sock 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1765641 ']' 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.356 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1765781 /var/tmp/spdk2.sock 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1765781 ']' 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
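Annotation: the -32603 failure above comes from SPDK's per-core lock files (/var/tmp/spdk_cpu_lock_NNN). The first target (pid 1765641 on /var/tmp/spdk.sock) already holds the lock for core 2, so when the second target (pid 1765781, started on the overlapping mask 0x1c with --disable-cpumask-locks) is asked to take its locks via framework_enable_cpumask_locks, claim_cpu_cores rejects core 2. A minimal sketch of that sequence, assuming the standard scripts/rpc.py helper; the first target's mask is inferred from the reactors on cores 0-2 above and is illustrative:

  # Sketch only - mask, paths and helper are assumptions matching the run above.
  ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &                            # holds locks for cores 0-2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, started without locks
  sleep 1   # the real test polls the sockets with waitforlisten instead of sleeping
  # Core 2 overlaps, so asking the second target to take its locks fails with -32603:
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "failed as expected: core 2 is already locked by the first target"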
00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.614 14:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.873 00:07:20.873 real 0m4.735s 00:07:20.873 user 0m1.542s 00:07:20.873 sys 0m0.256s 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.873 14:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.873 ************************************ 00:07:20.873 END TEST locking_overlapped_coremask_via_rpc 00:07:20.873 ************************************ 00:07:20.873 14:40:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:20.873 14:40:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:20.873 14:40:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1765641 ]] 00:07:20.873 14:40:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1765641 00:07:20.873 14:40:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1765641 ']' 00:07:20.873 14:40:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1765641 00:07:20.873 14:40:00 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:21.131 14:40:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765641 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765641' 00:07:21.132 killing process with pid 1765641 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1765641 00:07:21.132 14:40:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1765641 00:07:23.667 14:40:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1765781 ]] 00:07:23.667 14:40:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1765781 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1765781 ']' 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1765781 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765781 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765781' 00:07:23.667 killing process with pid 1765781 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1765781 00:07:23.667 14:40:02 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1765781 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1765641 ]] 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1765641 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1765641 ']' 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1765641 00:07:25.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1765641) - No such process 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1765641 is not found' 00:07:25.570 Process with pid 1765641 is not found 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1765781 ]] 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1765781 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1765781 ']' 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1765781 00:07:25.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1765781) - No such process 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1765781 is not found' 00:07:25.570 Process with pid 1765781 is not found 00:07:25.570 14:40:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.570 00:07:25.570 real 0m53.076s 00:07:25.570 user 1m27.634s 00:07:25.570 sys 0m7.893s 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.570 14:40:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.570 ************************************ 00:07:25.570 END TEST cpu_locks 00:07:25.570 ************************************ 00:07:25.570 14:40:04 event -- common/autotest_common.sh@1142 -- # return 0 00:07:25.570 00:07:25.570 real 1m22.982s 00:07:25.570 user 2m23.736s 00:07:25.570 sys 0m12.573s 00:07:25.570 14:40:04 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.570 14:40:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.570 ************************************ 00:07:25.570 END TEST event 00:07:25.570 ************************************ 00:07:25.570 14:40:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:25.570 14:40:04 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.570 14:40:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.570 14:40:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.570 
14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:07:25.570 ************************************ 00:07:25.570 START TEST thread 00:07:25.570 ************************************ 00:07:25.570 14:40:04 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.570 * Looking for test storage... 00:07:25.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:25.570 14:40:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.570 14:40:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:25.570 14:40:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.570 14:40:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.830 ************************************ 00:07:25.830 START TEST thread_poller_perf 00:07:25.830 ************************************ 00:07:25.830 14:40:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.830 [2024-07-14 14:40:04.935043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:25.830 [2024-07-14 14:40:04.935184] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766813 ] 00:07:25.830 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.830 [2024-07-14 14:40:05.076379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.089 [2024-07-14 14:40:05.332425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.089 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:27.469 ====================================== 00:07:27.469 busy:2714908406 (cyc) 00:07:27.469 total_run_count: 282000 00:07:27.469 tsc_hz: 2700000000 (cyc) 00:07:27.469 ====================================== 00:07:27.469 poller_cost: 9627 (cyc), 3565 (nsec) 00:07:27.469 00:07:27.469 real 0m1.876s 00:07:27.469 user 0m1.711s 00:07:27.469 sys 0m0.155s 00:07:27.469 14:40:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.469 14:40:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.469 ************************************ 00:07:27.469 END TEST thread_poller_perf 00:07:27.469 ************************************ 00:07:27.729 14:40:06 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:27.729 14:40:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.729 14:40:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:27.729 14:40:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.729 14:40:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.729 ************************************ 00:07:27.729 START TEST thread_poller_perf 00:07:27.729 ************************************ 00:07:27.729 14:40:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.729 [2024-07-14 14:40:06.855088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:27.729 [2024-07-14 14:40:06.855221] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767099 ] 00:07:27.729 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.729 [2024-07-14 14:40:06.985005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.988 [2024-07-14 14:40:07.240939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.988 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:29.894 ====================================== 00:07:29.894 busy:2704714584 (cyc) 00:07:29.894 total_run_count: 3664000 00:07:29.894 tsc_hz: 2700000000 (cyc) 00:07:29.894 ====================================== 00:07:29.894 poller_cost: 738 (cyc), 273 (nsec) 00:07:29.894 00:07:29.894 real 0m1.867s 00:07:29.894 user 0m1.697s 00:07:29.894 sys 0m0.159s 00:07:29.894 14:40:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.894 14:40:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.894 ************************************ 00:07:29.894 END TEST thread_poller_perf 00:07:29.894 ************************************ 00:07:29.894 14:40:08 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:29.895 14:40:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:29.895 00:07:29.895 real 0m3.886s 00:07:29.895 user 0m3.475s 00:07:29.895 sys 0m0.400s 00:07:29.895 14:40:08 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.895 14:40:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.895 ************************************ 00:07:29.895 END TEST thread 00:07:29.895 ************************************ 00:07:29.895 14:40:08 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.895 14:40:08 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:29.895 14:40:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.895 14:40:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.895 14:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:29.895 ************************************ 00:07:29.895 START TEST accel 00:07:29.895 ************************************ 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:29.895 * Looking for test storage... 00:07:29.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:29.895 14:40:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:29.895 14:40:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:29.895 14:40:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.895 14:40:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1767414 00:07:29.895 14:40:08 accel -- accel/accel.sh@63 -- # waitforlisten 1767414 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@829 -- # '[' -z 1767414 ']' 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.895 14:40:08 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:29.895 14:40:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.895 14:40:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
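Annotation: the two poller_perf summaries above work out to poller_cost = busy cycles / total_run_count, converted to nanoseconds with the reported tsc_hz. A sketch of that arithmetic using the numbers from the first run (1 microsecond period):

  # poller_cost derivation (first run above): 2714908406 busy cycles over 282000 polls at 2.7 GHz
  busy_cyc=2714908406; run_count=282000; tsc_hz=2700000000
  cyc_per_poll=$((busy_cyc / run_count))                # 9627 cycles per poller invocation
  ns_per_poll=$((cyc_per_poll * 1000000000 / tsc_hz))   # 3565 ns
  echo "poller_cost: ${cyc_per_poll} (cyc), ${ns_per_poll} (nsec)"
  # The 0 microsecond run follows the same arithmetic: 2704714584 / 3664000 = 738 cyc, ~273 ns.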
00:07:29.895 14:40:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.895 14:40:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.895 14:40:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.895 14:40:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.895 14:40:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.895 14:40:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:29.895 14:40:08 accel -- accel/accel.sh@41 -- # jq -r . 00:07:29.895 [2024-07-14 14:40:08.904209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:29.895 [2024-07-14 14:40:08.904360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767414 ] 00:07:29.895 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.895 [2024-07-14 14:40:09.033248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.155 [2024-07-14 14:40:09.286519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.096 14:40:10 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.096 14:40:10 accel -- common/autotest_common.sh@862 -- # return 0 00:07:31.096 14:40:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:31.096 14:40:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:31.096 14:40:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:31.096 14:40:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:31.096 14:40:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:31.096 14:40:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:31.096 14:40:10 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.096 14:40:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.096 14:40:10 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:31.096 14:40:10 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.096 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.096 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.096 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.097 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.097 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.097 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.097 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.097 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.097 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.097 14:40:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.097 14:40:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.097 14:40:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.097 14:40:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.097 14:40:10 accel -- accel/accel.sh@75 -- # killprocess 1767414 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@948 -- # '[' -z 1767414 ']' 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@952 -- # kill -0 1767414 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@953 -- # uname 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1767414 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1767414' 00:07:31.097 killing process with pid 1767414 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@967 -- # kill 1767414 00:07:31.097 14:40:10 accel -- common/autotest_common.sh@972 -- # wait 1767414 00:07:33.632 14:40:12 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:33.632 14:40:12 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.632 14:40:12 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:33.632 14:40:12 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:33.632 14:40:12 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.632 14:40:12 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.632 14:40:12 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.632 14:40:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.632 ************************************ 00:07:33.632 START TEST accel_missing_filename 00:07:33.632 ************************************ 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.632 14:40:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:33.632 14:40:12 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:33.632 [2024-07-14 14:40:12.845184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:33.632 [2024-07-14 14:40:12.845315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767866 ] 00:07:33.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.891 [2024-07-14 14:40:12.978747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.150 [2024-07-14 14:40:13.234473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.409 [2024-07-14 14:40:13.468205] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.976 [2024-07-14 14:40:14.020245] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:35.236 A filename is required. 
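Annotation: accel_missing_filename exercises the error path shown above: the compress workload needs an uncompressed input file via -l, so a run without one must abort, and the NOT wrapper turns that non-zero exit into a pass. A sketch of the same check, assuming the accel_perf binary path used in this workspace:

  # Sketch: compress with no -l input is expected to fail ("A filename is required.").
  if ./build/examples/accel_perf -t 1 -w compress; then
    echo "unexpected success" >&2; exit 1   # NOT() would report this as a test failure
  else
    echo "failed as expected"
  fi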
00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.236 00:07:35.236 real 0m1.680s 00:07:35.236 user 0m1.470s 00:07:35.236 sys 0m0.238s 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.236 14:40:14 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 ************************************ 00:07:35.236 END TEST accel_missing_filename 00:07:35.236 ************************************ 00:07:35.236 14:40:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.236 14:40:14 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.236 14:40:14 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:35.236 14:40:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.236 14:40:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 ************************************ 00:07:35.236 START TEST accel_compress_verify 00:07:35.236 ************************************ 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.236 14:40:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.236 14:40:14 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:35.236 14:40:14 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:35.496 [2024-07-14 14:40:14.574013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.496 [2024-07-14 14:40:14.574162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768136 ] 00:07:35.496 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.496 [2024-07-14 14:40:14.717491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.754 [2024-07-14 14:40:14.974463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.011 [2024-07-14 14:40:15.203739] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.580 [2024-07-14 14:40:15.764118] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:37.163 00:07:37.163 Compression does not support the verify option, aborting. 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.163 00:07:37.163 real 0m1.692s 00:07:37.163 user 0m1.471s 00:07:37.163 sys 0m0.250s 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.163 14:40:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:37.163 ************************************ 00:07:37.163 END TEST accel_compress_verify 00:07:37.163 ************************************ 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.163 14:40:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.163 ************************************ 00:07:37.163 START TEST accel_wrong_workload 00:07:37.163 ************************************ 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.163 14:40:16 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:37.163 14:40:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:37.163 Unsupported workload type: foobar 00:07:37.163 [2024-07-14 14:40:16.312723] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:37.163 accel_perf options: 00:07:37.163 [-h help message] 00:07:37.163 [-q queue depth per core] 00:07:37.163 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:37.163 [-T number of threads per core 00:07:37.163 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:37.163 [-t time in seconds] 00:07:37.163 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:37.163 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:37.163 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:37.163 [-l for compress/decompress workloads, name of uncompressed input file 00:07:37.163 [-S for crc32c workload, use this seed value (default 0) 00:07:37.163 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:37.163 [-f for fill workload, use this BYTE value (default 255) 00:07:37.163 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:37.163 [-y verify result if this switch is on] 00:07:37.163 [-a tasks to allocate per core (default: same value as -q)] 00:07:37.163 Can be used to spread operations across a wider range of memory. 
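Annotation: for contrast with the failing -w foobar run, the options listed above combine into an ordinary invocation such as the sketch below; queue depth, transfer size and duration are illustrative values, the flags themselves are taken from the help text.

  # Sketch of a valid run built from the options above (values are illustrative).
  ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y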
00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.163 00:07:37.163 real 0m0.058s 00:07:37.163 user 0m0.052s 00:07:37.163 sys 0m0.042s 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.163 14:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:37.163 ************************************ 00:07:37.163 END TEST accel_wrong_workload 00:07:37.163 ************************************ 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.163 14:40:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.163 ************************************ 00:07:37.163 START TEST accel_negative_buffers 00:07:37.163 ************************************ 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:37.163 14:40:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:37.163 -x option must be non-negative. 
00:07:37.163 [2024-07-14 14:40:16.413127] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:37.163 accel_perf options: 00:07:37.163 [-h help message] 00:07:37.163 [-q queue depth per core] 00:07:37.163 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:37.163 [-T number of threads per core 00:07:37.163 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:37.163 [-t time in seconds] 00:07:37.163 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:37.163 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:37.163 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:37.163 [-l for compress/decompress workloads, name of uncompressed input file 00:07:37.163 [-S for crc32c workload, use this seed value (default 0) 00:07:37.163 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:37.163 [-f for fill workload, use this BYTE value (default 255) 00:07:37.163 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:37.163 [-y verify result if this switch is on] 00:07:37.163 [-a tasks to allocate per core (default: same value as -q)] 00:07:37.163 Can be used to spread operations across a wider range of memory. 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.163 00:07:37.163 real 0m0.055s 00:07:37.163 user 0m0.057s 00:07:37.163 sys 0m0.033s 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.163 14:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:37.163 ************************************ 00:07:37.163 END TEST accel_negative_buffers 00:07:37.163 ************************************ 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.163 14:40:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.163 14:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.443 ************************************ 00:07:37.443 START TEST accel_crc32c 00:07:37.443 ************************************ 00:07:37.443 14:40:16 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:37.443 14:40:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:37.443 [2024-07-14 14:40:16.518643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:37.443 [2024-07-14 14:40:16.518763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768461 ] 00:07:37.443 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.443 [2024-07-14 14:40:16.648039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.701 [2024-07-14 14:40:16.909585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.960 14:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.961 14:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.961 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.961 14:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.862 14:40:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.862 00:07:39.862 real 0m2.691s 00:07:39.862 user 0m2.435s 00:07:39.862 sys 0m0.253s 00:07:39.862 14:40:19 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.862 14:40:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:39.862 ************************************ 00:07:39.862 END TEST accel_crc32c 00:07:39.862 ************************************ 00:07:40.122 14:40:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.122 14:40:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:40.122 14:40:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:40.122 14:40:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.122 14:40:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.122 ************************************ 00:07:40.122 START TEST accel_crc32c_C2 00:07:40.122 ************************************ 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.122 14:40:19 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:40.122 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:40.122 [2024-07-14 14:40:19.253944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:40.122 [2024-07-14 14:40:19.254088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768753 ] 00:07:40.122 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.122 [2024-07-14 14:40:19.383301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.379 [2024-07-14 14:40:19.644685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.636 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:40.637 14:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.171 00:07:43.171 real 0m2.693s 00:07:43.171 user 0m2.442s 00:07:43.171 sys 0m0.248s 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.171 14:40:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:43.171 ************************************ 00:07:43.171 END TEST accel_crc32c_C2 00:07:43.171 ************************************ 00:07:43.171 14:40:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.171 14:40:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:43.171 14:40:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:43.171 14:40:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.171 14:40:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.171 ************************************ 00:07:43.171 START TEST accel_copy 00:07:43.171 ************************************ 00:07:43.171 14:40:21 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
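Both crc32c runs above drive the same accel_perf binary; only the workload flags change (-w crc32c, plus -C 2 for the _C2 variant). A hedged one-liner to repeat such a run by hand against a local SPDK build; the relative path and the omission of a JSON config are assumptions, while the -t/-w/-y/-C flags are taken from the trace:

    # Re-run the software crc32c workload for 1 second (-t 1) with result
    # verification (-y), passing -C 2 as the _C2 test above does.
    sudo ./build/examples/accel_perf -t 1 -w crc32c -y -C 2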
00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:43.171 14:40:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:43.171 [2024-07-14 14:40:21.991923] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.171 [2024-07-14 14:40:21.992065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769053 ] 00:07:43.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.172 [2024-07-14 14:40:22.122367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.172 [2024-07-14 14:40:22.385436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.431 14:40:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 
14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:45.339 14:40:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.339 00:07:45.339 real 0m2.684s 00:07:45.339 user 0m2.439s 00:07:45.339 sys 0m0.240s 00:07:45.339 14:40:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.339 14:40:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.339 ************************************ 00:07:45.339 END TEST accel_copy 00:07:45.339 ************************************ 00:07:45.596 14:40:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.596 14:40:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.596 14:40:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:45.596 14:40:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.596 14:40:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 ************************************ 00:07:45.596 START TEST accel_fill 00:07:45.596 ************************************ 00:07:45.596 14:40:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:45.596 14:40:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:45.596 [2024-07-14 14:40:24.721449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:45.596 [2024-07-14 14:40:24.721592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769450 ] 00:07:45.596 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.596 [2024-07-14 14:40:24.866027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.854 [2024-07-14 14:40:25.125591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
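The fill test above is the first in this stretch that adds extra workload flags beyond -y/-C: accel_test passes -f 128 -q 64 -a 64, and the 128 fill byte reappears in the parsed config as val=0x80. A quick check of that hex/decimal correspondence (plain shell arithmetic, nothing SPDK-specific):

    # 128 decimal is 0x80, matching the val=0x80 entry in the trace.
    printf '0x%x\n' 128        # prints 0x80
    echo $(( 0x80 ))           # prints 128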
00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.111 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.112 14:40:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:48.650 14:40:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.650 00:07:48.650 real 0m2.674s 00:07:48.650 user 0m2.429s 00:07:48.650 sys 0m0.240s 00:07:48.650 14:40:27 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.650 14:40:27 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:48.650 ************************************ 00:07:48.650 END TEST accel_fill 00:07:48.650 ************************************ 00:07:48.650 14:40:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.650 14:40:27 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:48.650 14:40:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:48.650 14:40:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.650 14:40:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.650 ************************************ 00:07:48.650 START TEST accel_copy_crc32c 00:07:48.650 ************************************ 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:48.650 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:48.651 14:40:27 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:48.651 [2024-07-14 14:40:27.436780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:48.651 [2024-07-14 14:40:27.436940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769748 ] 00:07:48.651 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.651 [2024-07-14 14:40:27.567754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.651 [2024-07-14 14:40:27.830538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.911 
14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.911 14:40:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.818 00:07:50.818 real 0m2.698s 00:07:50.818 user 0m0.011s 00:07:50.818 sys 0m0.002s 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.818 14:40:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:50.818 ************************************ 00:07:50.818 END TEST accel_copy_crc32c 00:07:50.818 ************************************ 00:07:50.818 14:40:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.818 14:40:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:50.818 14:40:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:50.818 14:40:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.818 14:40:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.079 ************************************ 00:07:51.079 START TEST accel_copy_crc32c_C2 00:07:51.079 ************************************ 00:07:51.079 14:40:30 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.079 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:51.079 [2024-07-14 14:40:30.178421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:51.079 [2024-07-14 14:40:30.178548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770157 ] 00:07:51.079 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.079 [2024-07-14 14:40:30.308818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.339 [2024-07-14 14:40:30.570320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.597 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.597 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
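Each accel_perf start logs its DPDK EAL parameters in a bracketed block like the one above (--no-shconf, --huge-unlink, --base-virtaddr, a per-process --file-prefix, and so on). When comparing runs it can help to pull just those blocks out of the console output; a small sketch, with console.log standing in for wherever this output was saved:

    # List every EAL parameter block in the log, one per line.
    grep -o '\[ DPDK EAL parameters: [^]]*\]' console.log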
00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.598 14:40:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
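For this copy_crc32c -C 2 run the parsed config above carries two buffer sizes, '4096 bytes' and '8192 bytes'; the larger is exactly twice the smaller, which lines up with the -C 2 chained variant (that reading is an inference from the trace, not something the log states outright):

    # Sanity check of the two sizes seen in the trace.
    echo $(( 2 * 4096 ))   # prints 8192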
00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.131 00:07:54.131 real 0m2.693s 00:07:54.131 user 0m2.453s 00:07:54.131 sys 0m0.237s 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.131 14:40:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:54.131 ************************************ 00:07:54.131 END TEST accel_copy_crc32c_C2 00:07:54.131 ************************************ 00:07:54.131 14:40:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.131 14:40:32 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:54.131 14:40:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:54.131 14:40:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.131 14:40:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.131 ************************************ 00:07:54.131 START TEST accel_dualcast 00:07:54.131 ************************************ 00:07:54.131 14:40:32 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:54.131 14:40:32 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:54.131 [2024-07-14 14:40:32.917962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
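By this point the section has cycled through crc32c, crc32c_C2, copy, fill, copy_crc32c and copy_crc32c_C2, and is starting dualcast; every sub-test announces itself with a "START TEST accel_*" banner. A one-liner to recover that inventory from the saved console output (console.log is again an assumed filename):

    # Unique list of accel sub-tests exercised in this log.
    grep -o 'START TEST accel_[A-Za-z0-9_]*' console.log | sort -u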
00:07:54.131 [2024-07-14 14:40:32.918103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770448 ] 00:07:54.131 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.131 [2024-07-14 14:40:33.048156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.131 [2024-07-14 14:40:33.309367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.391 14:40:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:56.295 14:40:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.295 00:07:56.295 real 0m2.683s 00:07:56.295 user 0m2.448s 00:07:56.295 sys 0m0.230s 00:07:56.295 14:40:35 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.295 14:40:35 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:56.295 ************************************ 00:07:56.295 END TEST accel_dualcast 00:07:56.295 ************************************ 00:07:56.295 14:40:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.295 14:40:35 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:56.295 14:40:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.295 14:40:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.295 14:40:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.553 ************************************ 00:07:56.553 START TEST accel_compare 00:07:56.553 ************************************ 00:07:56.553 14:40:35 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:56.553 14:40:35 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:56.553 [2024-07-14 14:40:35.647161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
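Note: each TEST block in this log drives the standalone accel_perf example binary with a different -w workload; the harness generates a JSON accel config on the fly and feeds it in via -c /dev/fd/62. A rough manual equivalent of the compare run started above, assuming the same build-tree layout as this workspace, would be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1-second run of the software-module "compare" workload; the traced run used
    # 4096-byte buffers, and -y presumably enables result verification
    ./build/examples/accel_perf -t 1 -w compare -y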
00:07:56.553 [2024-07-14 14:40:35.647313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770741 ] 00:07:56.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.553 [2024-07-14 14:40:35.789361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.812 [2024-07-14 14:40:36.049891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.071 14:40:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 
14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.996 14:40:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.256 14:40:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.256 14:40:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:59.256 14:40:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.256 00:07:59.256 real 0m2.705s 00:07:59.256 user 0m2.451s 00:07:59.256 sys 0m0.249s 00:07:59.256 14:40:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.256 14:40:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:59.256 ************************************ 00:07:59.256 END TEST accel_compare 00:07:59.256 ************************************ 00:07:59.256 14:40:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.256 14:40:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:59.256 14:40:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.256 14:40:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.256 14:40:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.256 ************************************ 00:07:59.256 START TEST accel_xor 00:07:59.256 ************************************ 00:07:59.256 14:40:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:59.256 14:40:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:59.256 [2024-07-14 14:40:38.393081] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
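The xor pass follows the same pattern; the extra val=2 read in the trace that follows appears to be the number of XOR source buffers. A comparable sketch, same assumptions as above:

    ./build/examples/accel_perf -t 1 -w xor -y    # two source buffers in this run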
00:07:59.256 [2024-07-14 14:40:38.393223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771149 ] 00:07:59.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.256 [2024-07-14 14:40:38.524234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.515 [2024-07-14 14:40:38.785121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.774 14:40:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.336 00:08:02.336 real 0m2.690s 00:08:02.336 user 0m2.433s 00:08:02.336 sys 0m0.253s 00:08:02.336 14:40:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.336 14:40:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:02.336 ************************************ 00:08:02.336 END TEST accel_xor 00:08:02.336 ************************************ 00:08:02.336 14:40:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.336 14:40:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:02.336 14:40:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:02.336 14:40:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.336 14:40:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.336 ************************************ 00:08:02.336 START TEST accel_xor 00:08:02.336 ************************************ 00:08:02.336 14:40:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:02.336 14:40:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:02.336 [2024-07-14 14:40:41.133359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
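This second xor pass repeats the test with -x 3, i.e. three source buffers instead of the two used above, and the trace below accordingly reads val=3. Manually, under the same assumptions:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3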
00:08:02.336 [2024-07-14 14:40:41.133507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771440 ] 00:08:02.336 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.336 [2024-07-14 14:40:41.278767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.336 [2024-07-14 14:40:41.539531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:02.594 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.595 14:40:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:04.503 14:40:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.503 00:08:04.503 real 0m2.709s 00:08:04.503 user 0m2.450s 00:08:04.503 sys 0m0.255s 00:08:04.503 14:40:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.503 14:40:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:04.503 ************************************ 00:08:04.503 END TEST accel_xor 00:08:04.503 ************************************ 00:08:04.763 14:40:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.763 14:40:43 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:04.763 14:40:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:04.763 14:40:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.763 14:40:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.763 ************************************ 00:08:04.763 START TEST accel_dif_verify 00:08:04.763 ************************************ 00:08:04.763 14:40:43 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:04.763 14:40:43 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:04.763 [2024-07-14 14:40:43.893328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
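The DIF tests carry extra protection-information parameters: besides the usual '4096 bytes' buffers, the trace below passes '512 bytes' and '8 bytes' values, presumably the DIF block and metadata sizes. A minimal sketch under the same assumptions:

    ./build/examples/accel_perf -t 1 -w dif_verify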
00:08:04.763 [2024-07-14 14:40:43.893455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771845 ] 00:08:04.763 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.763 [2024-07-14 14:40:44.024165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.023 [2024-07-14 14:40:44.289534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.283 14:40:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.826 14:40:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.827 14:40:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:07.827 14:40:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.827 00:08:07.827 real 0m2.682s 00:08:07.827 user 0m0.013s 00:08:07.827 sys 0m0.001s 00:08:07.827 14:40:46 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.827 14:40:46 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:07.827 ************************************ 00:08:07.827 END TEST accel_dif_verify 00:08:07.827 ************************************ 00:08:07.827 14:40:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.827 14:40:46 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:07.827 14:40:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:07.827 14:40:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.827 14:40:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.827 ************************************ 00:08:07.827 START TEST accel_dif_generate 00:08:07.827 ************************************ 00:08:07.827 14:40:46 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.827 
14:40:46 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:07.827 14:40:46 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:07.827 [2024-07-14 14:40:46.626808] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:07.827 [2024-07-14 14:40:46.626960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772136 ] 00:08:07.827 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.827 [2024-07-14 14:40:46.756448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.827 [2024-07-14 14:40:47.014708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:08.087 14:40:47 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.087 14:40:47 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.088 14:40:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.999 14:40:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:09.999 14:40:49 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.999 00:08:09.999 real 0m2.694s 00:08:09.999 user 0m2.469s 00:08:09.999 sys 0m0.223s 00:08:09.999 14:40:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.999 14:40:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:09.999 ************************************ 00:08:09.999 END TEST accel_dif_generate 00:08:09.999 ************************************ 00:08:09.999 14:40:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.999 14:40:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:09.999 14:40:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:09.999 14:40:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.999 14:40:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.259 ************************************ 00:08:10.259 START TEST accel_dif_generate_copy 00:08:10.259 ************************************ 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:10.259 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:10.259 [2024-07-14 14:40:49.366025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:10.259 [2024-07-14 14:40:49.366167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772550 ] 00:08:10.259 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.259 [2024-07-14 14:40:49.493690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.519 [2024-07-14 14:40:49.756677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.784 14:40:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.318 00:08:13.318 real 0m2.698s 00:08:13.318 user 0m2.453s 00:08:13.318 sys 0m0.242s 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.318 14:40:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.318 ************************************ 00:08:13.318 END TEST accel_dif_generate_copy 00:08:13.318 ************************************ 00:08:13.318 14:40:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.318 14:40:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:13.318 14:40:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.318 14:40:52 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:13.319 14:40:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.319 14:40:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.319 ************************************ 00:08:13.319 START TEST accel_comp 00:08:13.319 ************************************ 00:08:13.319 14:40:52 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.319 14:40:52 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:13.319 14:40:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:13.319 [2024-07-14 14:40:52.112254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:13.319 [2024-07-14 14:40:52.112376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772843 ] 00:08:13.319 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.319 [2024-07-14 14:40:52.240804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.319 [2024-07-14 14:40:52.502235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.577 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.578 14:40:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:15.483 14:40:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.483 00:08:15.483 real 0m2.689s 00:08:15.483 user 0m2.453s 00:08:15.483 sys 0m0.233s 00:08:15.483 14:40:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.483 14:40:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:15.483 ************************************ 00:08:15.483 END TEST accel_comp 00:08:15.483 ************************************ 00:08:15.483 14:40:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.483 14:40:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.483 14:40:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:15.483 14:40:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.483 14:40:54 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.742 ************************************ 00:08:15.742 START TEST accel_decomp 00:08:15.742 ************************************ 00:08:15.742 14:40:54 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:15.742 14:40:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:15.742 [2024-07-14 14:40:54.848775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:15.742 [2024-07-14 14:40:54.848931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773128 ] 00:08:15.742 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.742 [2024-07-14 14:40:54.991433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.000 [2024-07-14 14:40:55.258006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.260 14:40:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.799 14:40:57 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.799 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.800 14:40:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.800 00:08:18.800 real 0m2.710s 00:08:18.800 user 0m2.457s 00:08:18.800 sys 0m0.250s 00:08:18.800 14:40:57 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.800 14:40:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:18.800 ************************************ 00:08:18.800 END TEST accel_decomp 00:08:18.800 ************************************ 00:08:18.800 14:40:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.800 14:40:57 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.800 14:40:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:18.800 14:40:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.800 14:40:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.800 ************************************ 00:08:18.800 START TEST accel_decomp_full 00:08:18.800 ************************************ 00:08:18.800 14:40:57 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.800 14:40:57 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:18.800 14:40:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:18.800 [2024-07-14 14:40:57.607239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:18.800 [2024-07-14 14:40:57.607385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773539 ] 00:08:18.800 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.800 [2024-07-14 14:40:57.752334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.800 [2024-07-14 14:40:58.010512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:19.067 14:40:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:19.067 14:40:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:19.067 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:19.067 14:40:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.990 14:41:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.990 00:08:20.990 real 0m2.731s 00:08:20.990 user 0m2.478s 00:08:20.990 sys 0m0.249s 00:08:20.990 14:41:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.990 14:41:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:20.990 ************************************ 00:08:20.990 END TEST accel_decomp_full 00:08:20.990 ************************************ 00:08:21.249 14:41:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.249 14:41:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.249 14:41:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:21.249 14:41:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.249 14:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.249 ************************************ 00:08:21.249 START TEST accel_decomp_mcore 00:08:21.249 ************************************ 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:21.249 14:41:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:21.249 [2024-07-14 14:41:00.385557] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:21.249 [2024-07-14 14:41:00.385686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773835 ] 00:08:21.249 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.249 [2024-07-14 14:41:00.515241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.508 [2024-07-14 14:41:00.782882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.508 [2024-07-14 14:41:00.782907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.508 [2024-07-14 14:41:00.782951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.508 [2024-07-14 14:41:00.782960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:21.768 14:41:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.309 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.310 14:41:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.310 14:41:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.310 00:08:24.310 real 0m2.669s 00:08:24.310 user 0m0.013s 00:08:24.310 sys 0m0.003s 00:08:24.310 14:41:03 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.310 14:41:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:24.310 ************************************ 00:08:24.310 END TEST accel_decomp_mcore 00:08:24.310 ************************************ 00:08:24.310 14:41:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.310 14:41:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.310 14:41:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:24.310 14:41:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.310 14:41:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.310 ************************************ 00:08:24.310 START TEST accel_decomp_full_mcore 00:08:24.310 ************************************ 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:24.310 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:24.310 [2024-07-14 14:41:03.099553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:24.310 [2024-07-14 14:41:03.099701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774241 ] 00:08:24.310 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.310 [2024-07-14 14:41:03.235300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.310 [2024-07-14 14:41:03.504521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.310 [2024-07-14 14:41:03.504589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.310 [2024-07-14 14:41:03.504634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.310 [2024-07-14 14:41:03.504644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.570 14:41:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.473 00:08:26.473 real 0m2.709s 00:08:26.473 user 0m0.011s 00:08:26.473 sys 0m0.004s 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.473 14:41:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:26.473 ************************************ 00:08:26.473 END TEST accel_decomp_full_mcore 00:08:26.473 ************************************ 00:08:26.733 14:41:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.733 14:41:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.733 14:41:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:26.733 14:41:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.733 14:41:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 ************************************ 00:08:26.733 START TEST accel_decomp_mthread 00:08:26.733 ************************************ 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:26.733 14:41:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:26.733 [2024-07-14 14:41:05.860145] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:26.733 [2024-07-14 14:41:05.860270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774538 ] 00:08:26.733 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.733 [2024-07-14 14:41:05.992189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.991 [2024-07-14 14:41:06.254314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.250 14:41:06 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:27.250 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.251 14:41:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.808 00:08:29.808 real 0m2.705s 00:08:29.808 user 0m2.461s 00:08:29.808 sys 0m0.241s 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.808 14:41:08 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 ************************************ 00:08:29.808 END TEST accel_decomp_mthread 00:08:29.808 ************************************ 00:08:29.808 14:41:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:29.808 14:41:08 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.808 14:41:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:29.808 14:41:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.808 14:41:08 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.808 ************************************ 00:08:29.808 START TEST accel_decomp_full_mthread 00:08:29.808 ************************************ 00:08:29.808 14:41:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.808 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:29.808 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:29.808 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:29.809 14:41:08 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:29.809 [2024-07-14 14:41:08.615634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:29.809 [2024-07-14 14:41:08.615768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774879 ] 00:08:29.809 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.809 [2024-07-14 14:41:08.745905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.809 [2024-07-14 14:41:09.011475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.068 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.069 14:41:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.069 14:41:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.605 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.606 00:08:32.606 real 0m2.742s 00:08:32.606 user 0m0.011s 00:08:32.606 sys 0m0.003s 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.606 14:41:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:32.606 ************************************ 00:08:32.606 END 
TEST accel_decomp_full_mthread 00:08:32.606 ************************************ 00:08:32.606 14:41:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:32.606 14:41:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:32.606 14:41:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:32.606 14:41:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:32.606 14:41:11 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:32.606 14:41:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.606 14:41:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.606 14:41:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.606 14:41:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.606 14:41:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.606 14:41:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.606 14:41:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.606 14:41:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:32.606 14:41:11 accel -- accel/accel.sh@41 -- # jq -r . 00:08:32.606 ************************************ 00:08:32.606 START TEST accel_dif_functional_tests 00:08:32.606 ************************************ 00:08:32.606 14:41:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:32.606 [2024-07-14 14:41:11.437478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:32.606 [2024-07-14 14:41:11.437611] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775243 ] 00:08:32.606 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.606 [2024-07-14 14:41:11.566053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.606 [2024-07-14 14:41:11.832041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.606 [2024-07-14 14:41:11.832088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.606 [2024-07-14 14:41:11.832098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.176 00:08:33.176 00:08:33.176 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.176 http://cunit.sourceforge.net/ 00:08:33.176 00:08:33.176 00:08:33.176 Suite: accel_dif 00:08:33.176 Test: verify: DIF generated, GUARD check ...passed 00:08:33.176 Test: verify: DIF generated, APPTAG check ...passed 00:08:33.176 Test: verify: DIF generated, REFTAG check ...passed 00:08:33.176 Test: verify: DIF not generated, GUARD check ...[2024-07-14 14:41:12.194339] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.176 passed 00:08:33.176 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 14:41:12.194457] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.176 passed 00:08:33.176 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 14:41:12.194527] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.176 passed 00:08:33.176 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:33.176 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 
14:41:12.194665] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:33.176 passed 00:08:33.176 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:33.176 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:33.176 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:33.176 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 14:41:12.194936] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:33.176 passed 00:08:33.176 Test: verify copy: DIF generated, GUARD check ...passed 00:08:33.176 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:33.176 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:33.176 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 14:41:12.195242] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.176 passed 00:08:33.176 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 14:41:12.195329] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.176 passed 00:08:33.176 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 14:41:12.195413] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.176 passed 00:08:33.176 Test: generate copy: DIF generated, GUARD check ...passed 00:08:33.176 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:33.176 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:33.176 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:33.176 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:33.176 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:33.176 Test: generate copy: iovecs-len validate ...[2024-07-14 14:41:12.195902] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:33.176 passed 00:08:33.176 Test: generate copy: buffer alignment validate ...passed 00:08:33.176 00:08:33.176 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.176 suites 1 1 n/a 0 0 00:08:33.176 tests 26 26 26 0 0 00:08:33.176 asserts 115 115 115 0 n/a 00:08:33.176 00:08:33.176 Elapsed time = 0.005 seconds 00:08:34.557 00:08:34.557 real 0m2.080s 00:08:34.557 user 0m4.029s 00:08:34.557 sys 0m0.310s 00:08:34.557 14:41:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.557 14:41:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 ************************************ 00:08:34.557 END TEST accel_dif_functional_tests 00:08:34.557 ************************************ 00:08:34.557 14:41:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:34.557 00:08:34.557 real 1m4.709s 00:08:34.557 user 1m11.119s 00:08:34.557 sys 0m7.163s 00:08:34.557 14:41:13 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.557 14:41:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 ************************************ 00:08:34.557 END TEST accel 00:08:34.557 ************************************ 00:08:34.557 14:41:13 -- common/autotest_common.sh@1142 -- # return 0 00:08:34.557 14:41:13 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:34.557 14:41:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.557 14:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.557 14:41:13 -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 ************************************ 00:08:34.557 START TEST accel_rpc 00:08:34.557 ************************************ 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:34.557 * Looking for test storage... 00:08:34.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:34.557 14:41:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:34.557 14:41:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1775565 00:08:34.557 14:41:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:34.557 14:41:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1775565 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1775565 ']' 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.557 14:41:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 [2024-07-14 14:41:13.670826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:34.557 [2024-07-14 14:41:13.671001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775565 ] 00:08:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.557 [2024-07-14 14:41:13.815731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.817 [2024-07-14 14:41:14.054358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.385 14:41:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.385 14:41:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:35.385 14:41:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:35.385 14:41:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:35.385 14:41:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:35.385 14:41:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:35.385 14:41:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:35.385 14:41:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.385 14:41:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.385 14:41:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.385 ************************************ 00:08:35.385 START TEST accel_assign_opcode 00:08:35.385 ************************************ 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.385 [2024-07-14 14:41:14.664821] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.385 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.385 [2024-07-14 14:41:14.672803] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:35.386 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.386 14:41:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:35.386 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.386 14:41:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.323 software 00:08:36.323 00:08:36.323 real 0m0.941s 00:08:36.323 user 0m0.040s 00:08:36.323 sys 0m0.009s 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.323 14:41:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.323 ************************************ 00:08:36.323 END TEST accel_assign_opcode 00:08:36.323 ************************************ 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:36.323 14:41:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1775565 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1775565 ']' 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1775565 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.323 14:41:15 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775565 00:08:36.582 14:41:15 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.582 14:41:15 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.583 14:41:15 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775565' 00:08:36.583 killing process with pid 1775565 00:08:36.583 14:41:15 accel_rpc -- common/autotest_common.sh@967 -- # kill 1775565 00:08:36.583 14:41:15 accel_rpc -- common/autotest_common.sh@972 -- # wait 1775565 00:08:39.116 00:08:39.116 real 0m4.666s 00:08:39.116 user 0m4.671s 00:08:39.116 sys 0m0.642s 00:08:39.116 14:41:18 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.116 14:41:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.116 ************************************ 00:08:39.116 END TEST accel_rpc 00:08:39.116 ************************************ 00:08:39.116 14:41:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:39.116 14:41:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:39.116 14:41:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.116 14:41:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.116 14:41:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.116 ************************************ 00:08:39.116 START TEST app_cmdline 00:08:39.116 ************************************ 00:08:39.116 14:41:18 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:39.116 * Looking for test storage... 
00:08:39.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:39.116 14:41:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:39.116 14:41:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1776169 00:08:39.116 14:41:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:39.117 14:41:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1776169 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1776169 ']' 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.117 14:41:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.117 [2024-07-14 14:41:18.381475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:39.117 [2024-07-14 14:41:18.381608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776169 ] 00:08:39.376 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.376 [2024-07-14 14:41:18.517438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.633 [2024-07-14 14:41:18.778321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.608 14:41:19 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.608 14:41:19 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:40.608 14:41:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:40.865 { 00:08:40.865 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:40.865 "fields": { 00:08:40.865 "major": 24, 00:08:40.865 "minor": 9, 00:08:40.865 "patch": 0, 00:08:40.865 "suffix": "-pre", 00:08:40.865 "commit": "719d03c6a" 00:08:40.865 } 00:08:40.866 } 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:40.866 14:41:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.866 14:41:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.866 14:41:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.866 14:41:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.866 14:41:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.866 14:41:20 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.866 14:41:20 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.124 request: 00:08:41.124 { 00:08:41.124 "method": "env_dpdk_get_mem_stats", 00:08:41.124 "req_id": 1 00:08:41.124 } 00:08:41.124 Got JSON-RPC error response 00:08:41.124 response: 00:08:41.124 { 00:08:41.124 "code": -32601, 00:08:41.124 "message": "Method not found" 00:08:41.124 } 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:41.124 14:41:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1776169 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1776169 ']' 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1776169 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776169 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776169' 00:08:41.124 killing process with pid 1776169 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@967 -- # kill 1776169 00:08:41.124 14:41:20 app_cmdline -- common/autotest_common.sh@972 -- # wait 1776169 00:08:43.667 00:08:43.667 real 0m4.595s 00:08:43.667 user 0m4.979s 00:08:43.667 sys 0m0.690s 00:08:43.667 14:41:22 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:43.667 14:41:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.667 ************************************ 00:08:43.667 END TEST app_cmdline 00:08:43.667 ************************************ 00:08:43.667 14:41:22 -- common/autotest_common.sh@1142 -- # return 0 00:08:43.667 14:41:22 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:43.667 14:41:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.667 14:41:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.667 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:08:43.667 ************************************ 00:08:43.667 START TEST version 00:08:43.667 ************************************ 00:08:43.667 14:41:22 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:43.667 * Looking for test storage... 00:08:43.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:43.667 14:41:22 version -- app/version.sh@17 -- # get_header_version major 00:08:43.667 14:41:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # cut -f2 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.667 14:41:22 version -- app/version.sh@17 -- # major=24 00:08:43.667 14:41:22 version -- app/version.sh@18 -- # get_header_version minor 00:08:43.667 14:41:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # cut -f2 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.667 14:41:22 version -- app/version.sh@18 -- # minor=9 00:08:43.667 14:41:22 version -- app/version.sh@19 -- # get_header_version patch 00:08:43.667 14:41:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # cut -f2 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.667 14:41:22 version -- app/version.sh@19 -- # patch=0 00:08:43.667 14:41:22 version -- app/version.sh@20 -- # get_header_version suffix 00:08:43.667 14:41:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # cut -f2 00:08:43.667 14:41:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.667 14:41:22 version -- app/version.sh@20 -- # suffix=-pre 00:08:43.667 14:41:22 version -- app/version.sh@22 -- # version=24.9 00:08:43.667 14:41:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:43.667 14:41:22 version -- app/version.sh@28 -- # version=24.9rc0 00:08:43.667 14:41:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:43.667 14:41:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:43.926 14:41:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:43.926 14:41:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:43.926 00:08:43.926 real 0m0.109s 00:08:43.926 user 0m0.063s 00:08:43.926 sys 0m0.067s 00:08:43.926 14:41:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.926 14:41:22 version -- common/autotest_common.sh@10 -- # set +x 00:08:43.926 ************************************ 00:08:43.926 END TEST version 00:08:43.926 ************************************ 00:08:43.926 14:41:23 -- common/autotest_common.sh@1142 -- # return 0 00:08:43.926 14:41:23 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@198 -- # uname -s 00:08:43.926 14:41:23 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:43.926 14:41:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.926 14:41:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.926 14:41:23 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:43.926 14:41:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.926 14:41:23 -- common/autotest_common.sh@10 -- # set +x 00:08:43.926 14:41:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:43.926 14:41:23 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:43.926 14:41:23 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.926 14:41:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.926 14:41:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.926 14:41:23 -- common/autotest_common.sh@10 -- # set +x 00:08:43.926 ************************************ 00:08:43.926 START TEST nvmf_tcp 00:08:43.926 ************************************ 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.926 * Looking for test storage... 00:08:43.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.926 14:41:23 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.926 14:41:23 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.926 14:41:23 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.926 14:41:23 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.926 14:41:23 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.926 14:41:23 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.926 14:41:23 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:43.926 14:41:23 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:43.926 14:41:23 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.926 14:41:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.926 ************************************ 00:08:43.926 START TEST nvmf_example 00:08:43.926 ************************************ 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.926 * Looking for test storage... 
00:08:43.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.926 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.927 14:41:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.832 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.832 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:46.090 00:08:46.090 --- 10.0.0.2 ping statistics --- 00:08:46.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.090 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:46.090 00:08:46.090 --- 10.0.0.1 ping statistics --- 00:08:46.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.090 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1778464 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1778464 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1778464 ']' 00:08:46.090 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.091 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.091 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
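The nvmf_tcp_init sequence traced above puts one port of the e810 pair (cvl_0_0) into a private network namespace and leaves the other (cvl_0_1) in the root namespace, so target and initiator traffic crosses real hardware on a single host. A condensed sketch of the equivalent setup, assuming the same interface names and the 10.0.0.0/24 addressing shown in the log:

    # move the target-side port into its own namespace with the target IP
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # keep the initiator-side port in the root namespace with the initiator IP
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # accept NVMe/TCP traffic (port 4420) arriving on the initiator-facing port,
    # then verify reachability in both directions, exactly as the test does
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
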
00:08:46.091 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.091 14:41:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:46.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.025 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.025 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.026 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:47.285 14:41:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:47.285 EAL: No free 2048 kB hugepages reported on node 1 
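The target configuration driven through rpc_cmd above is, in rpc.py terms, roughly the following sequence (paths relative to the spdk checkout; the example-app and perf flags are copied from the trace, and Malloc0 is the name returned by the malloc bdev create):

    # start the example NVMe-oF target inside the target namespace (flags as traced)
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # TCP transport plus a 64 MiB malloc bdev with 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # prints the bdev name, Malloc0

    # expose the bdev as namespace 1 of cnode1 and listen on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10 seconds of 4 KiB random mixed I/O at queue depth 64 from the initiator side
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

whose output is the latency summary that follows.
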
00:08:59.483 Initializing NVMe Controllers 00:08:59.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:59.483 Initialization complete. Launching workers. 00:08:59.483 ======================================================== 00:08:59.483 Latency(us) 00:08:59.483 Device Information : IOPS MiB/s Average min max 00:08:59.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11554.23 45.13 5538.49 1270.04 16188.03 00:08:59.483 ======================================================== 00:08:59.483 Total : 11554.23 45.13 5538.49 1270.04 16188.03 00:08:59.483 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.483 rmmod nvme_tcp 00:08:59.483 rmmod nvme_fabrics 00:08:59.483 rmmod nvme_keyring 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1778464 ']' 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1778464 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1778464 ']' 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1778464 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1778464 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1778464' 00:08:59.483 killing process with pid 1778464 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1778464 00:08:59.483 14:41:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1778464 00:08:59.483 nvmf threads initialize successfully 00:08:59.483 bdev subsystem init successfully 00:08:59.483 created a nvmf target service 00:08:59.483 create targets's poll groups done 00:08:59.483 all subsystems of target started 00:08:59.483 nvmf target is running 00:08:59.483 all subsystems of target stopped 00:08:59.483 destroy targets's poll groups done 00:08:59.483 destroyed the nvmf target service 00:08:59.483 bdev subsystem finish successfully 00:08:59.483 nvmf threads destroy successfully 00:08:59.483 14:41:38 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.483 14:41:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.863 00:09:00.863 real 0m16.997s 00:09:00.863 user 0m47.265s 00:09:00.863 sys 0m3.464s 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.863 14:41:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.863 ************************************ 00:09:00.863 END TEST nvmf_example 00:09:00.863 ************************************ 00:09:01.124 14:41:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.124 14:41:40 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.124 14:41:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.125 14:41:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.125 14:41:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.125 ************************************ 00:09:01.125 START TEST nvmf_filesystem 00:09:01.125 ************************************ 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.125 * Looking for test storage... 
00:09:01.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:01.125 14:41:40 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:01.125 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:01.125 #define SPDK_CONFIG_H 00:09:01.125 #define SPDK_CONFIG_APPS 1 00:09:01.125 #define SPDK_CONFIG_ARCH native 00:09:01.125 #define SPDK_CONFIG_ASAN 1 00:09:01.125 #undef SPDK_CONFIG_AVAHI 00:09:01.125 #undef SPDK_CONFIG_CET 00:09:01.125 #define SPDK_CONFIG_COVERAGE 1 00:09:01.125 #define SPDK_CONFIG_CROSS_PREFIX 00:09:01.125 #undef SPDK_CONFIG_CRYPTO 00:09:01.125 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:01.126 #undef SPDK_CONFIG_CUSTOMOCF 00:09:01.126 #undef SPDK_CONFIG_DAOS 00:09:01.126 #define SPDK_CONFIG_DAOS_DIR 00:09:01.126 #define SPDK_CONFIG_DEBUG 1 00:09:01.126 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:01.126 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.126 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:01.126 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:01.126 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:01.126 #undef SPDK_CONFIG_DPDK_UADK 00:09:01.126 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.126 #define SPDK_CONFIG_EXAMPLES 1 00:09:01.126 #undef SPDK_CONFIG_FC 00:09:01.126 #define SPDK_CONFIG_FC_PATH 00:09:01.126 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:01.126 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:01.126 #undef SPDK_CONFIG_FUSE 00:09:01.126 #undef SPDK_CONFIG_FUZZER 00:09:01.126 #define SPDK_CONFIG_FUZZER_LIB 00:09:01.126 #undef SPDK_CONFIG_GOLANG 00:09:01.126 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:01.126 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:01.126 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:01.126 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:01.126 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:01.126 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:01.126 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:01.126 #define SPDK_CONFIG_IDXD 1 00:09:01.126 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:01.126 #undef SPDK_CONFIG_IPSEC_MB 00:09:01.126 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:01.126 #define SPDK_CONFIG_ISAL 1 00:09:01.126 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:01.126 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:01.126 #define SPDK_CONFIG_LIBDIR 00:09:01.126 #undef SPDK_CONFIG_LTO 00:09:01.126 #define SPDK_CONFIG_MAX_LCORES 128 00:09:01.126 #define SPDK_CONFIG_NVME_CUSE 1 00:09:01.126 #undef SPDK_CONFIG_OCF 00:09:01.126 #define SPDK_CONFIG_OCF_PATH 00:09:01.126 #define 
SPDK_CONFIG_OPENSSL_PATH 00:09:01.126 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:01.126 #define SPDK_CONFIG_PGO_DIR 00:09:01.126 #undef SPDK_CONFIG_PGO_USE 00:09:01.126 #define SPDK_CONFIG_PREFIX /usr/local 00:09:01.126 #undef SPDK_CONFIG_RAID5F 00:09:01.126 #undef SPDK_CONFIG_RBD 00:09:01.126 #define SPDK_CONFIG_RDMA 1 00:09:01.126 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:01.126 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:01.126 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:01.126 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:01.126 #define SPDK_CONFIG_SHARED 1 00:09:01.126 #undef SPDK_CONFIG_SMA 00:09:01.126 #define SPDK_CONFIG_TESTS 1 00:09:01.126 #undef SPDK_CONFIG_TSAN 00:09:01.126 #define SPDK_CONFIG_UBLK 1 00:09:01.126 #define SPDK_CONFIG_UBSAN 1 00:09:01.126 #undef SPDK_CONFIG_UNIT_TESTS 00:09:01.126 #undef SPDK_CONFIG_URING 00:09:01.126 #define SPDK_CONFIG_URING_PATH 00:09:01.126 #undef SPDK_CONFIG_URING_ZNS 00:09:01.126 #undef SPDK_CONFIG_USDT 00:09:01.126 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:01.126 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:01.126 #undef SPDK_CONFIG_VFIO_USER 00:09:01.126 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:01.126 #define SPDK_CONFIG_VHOST 1 00:09:01.126 #define SPDK_CONFIG_VIRTIO 1 00:09:01.126 #undef SPDK_CONFIG_VTUNE 00:09:01.126 #define SPDK_CONFIG_VTUNE_DIR 00:09:01.126 #define SPDK_CONFIG_WERROR 1 00:09:01.126 #define SPDK_CONFIG_WPDK_DIR 00:09:01.126 #undef SPDK_CONFIG_XNVME 00:09:01.126 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:01.126 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:01.127 14:41:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.127 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1780307 ]] 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1780307 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.OwgpW3 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OwgpW3/tests/target /tmp/spdk.OwgpW3 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55666360320 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6328332288 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30992633856 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.128 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996983808 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:09:01.129 14:41:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=364544 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:01.129 * Looking for test storage... 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55666360320 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8542924800 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:01.129 14:41:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.129 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.130 14:41:40 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.130 14:41:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:03.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:03.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.033 14:41:42 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:03.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:03.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.033 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:09:03.292 00:09:03.292 --- 10.0.0.2 ping statistics --- 00:09:03.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.292 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:09:03.292 00:09:03.292 --- 10.0.0.1 ping statistics --- 00:09:03.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.292 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.292 ************************************ 00:09:03.292 START TEST nvmf_filesystem_no_in_capsule 00:09:03.292 ************************************ 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1781928 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1781928 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1781928 ']' 00:09:03.292 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.293 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.293 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.293 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.293 14:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.293 [2024-07-14 14:41:42.588319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:03.293 [2024-07-14 14:41:42.588472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.552 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.552 [2024-07-14 14:41:42.727512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.809 [2024-07-14 14:41:42.987197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.809 [2024-07-14 14:41:42.987278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.809 [2024-07-14 14:41:42.987306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.809 [2024-07-14 14:41:42.987326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.809 [2024-07-14 14:41:42.987347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
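For readers following the trace: the nvmf_tcp_init and nvmfappstart steps logged above boil down to the shell sequence below. This is a condensed sketch using the interface names, addresses and options taken from this log; the relative binary path and the trailing backgrounding are illustrative rather than the harness's exact wording.

  # move the target-side e810 port into its own network namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator stays in the root namespace on 10.0.0.1, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the default NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # start the SPDK target inside the namespace: -m 0xF = four cores, -e 0xFFFF = all tracepoint groups
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &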
00:09:03.810 [2024-07-14 14:41:42.987483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.810 [2024-07-14 14:41:42.987559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.810 [2024-07-14 14:41:42.987650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.810 [2024-07-14 14:41:42.987660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.375 [2024-07-14 14:41:43.542319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.375 14:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.939 Malloc1 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.939 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.940 [2024-07-14 14:41:44.137608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:04.940 { 00:09:04.940 "name": "Malloc1", 00:09:04.940 "aliases": [ 00:09:04.940 "cd7a6cae-8084-44f6-9707-ceb6eca73ff7" 00:09:04.940 ], 00:09:04.940 "product_name": "Malloc disk", 00:09:04.940 "block_size": 512, 00:09:04.940 "num_blocks": 1048576, 00:09:04.940 "uuid": "cd7a6cae-8084-44f6-9707-ceb6eca73ff7", 00:09:04.940 "assigned_rate_limits": { 00:09:04.940 "rw_ios_per_sec": 0, 00:09:04.940 "rw_mbytes_per_sec": 0, 00:09:04.940 "r_mbytes_per_sec": 0, 00:09:04.940 "w_mbytes_per_sec": 0 00:09:04.940 }, 00:09:04.940 "claimed": true, 00:09:04.940 "claim_type": "exclusive_write", 00:09:04.940 "zoned": false, 00:09:04.940 "supported_io_types": { 00:09:04.940 "read": true, 00:09:04.940 "write": true, 00:09:04.940 "unmap": true, 00:09:04.940 "flush": true, 00:09:04.940 "reset": true, 00:09:04.940 "nvme_admin": false, 00:09:04.940 "nvme_io": false, 00:09:04.940 "nvme_io_md": false, 00:09:04.940 "write_zeroes": true, 00:09:04.940 "zcopy": true, 00:09:04.940 "get_zone_info": false, 00:09:04.940 "zone_management": false, 00:09:04.940 "zone_append": false, 00:09:04.940 "compare": false, 00:09:04.940 "compare_and_write": false, 00:09:04.940 "abort": true, 00:09:04.940 "seek_hole": false, 00:09:04.940 "seek_data": false, 00:09:04.940 "copy": true, 00:09:04.940 "nvme_iov_md": false 00:09:04.940 }, 00:09:04.940 "memory_domains": [ 00:09:04.940 { 
00:09:04.940 "dma_device_id": "system", 00:09:04.940 "dma_device_type": 1 00:09:04.940 }, 00:09:04.940 { 00:09:04.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.940 "dma_device_type": 2 00:09:04.940 } 00:09:04.940 ], 00:09:04.940 "driver_specific": {} 00:09:04.940 } 00:09:04.940 ]' 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:04.940 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.872 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.872 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:05.872 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.872 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:05.872 14:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:07.805 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:07.806 14:41:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:08.062 14:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:08.624 14:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:09.555 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:09.555 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:09.555 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:09.555 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.556 ************************************ 00:09:09.556 START TEST filesystem_ext4 00:09:09.556 ************************************ 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:09.556 14:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:09.556 14:41:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:09.556 mke2fs 1.46.5 (30-Dec-2021) 00:09:09.556 Discarding device blocks: 0/522240 done 00:09:09.812 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:09.812 Filesystem UUID: ece7cb61-f03c-4924-acf5-4a2be35037dd 00:09:09.812 Superblock backups stored on blocks: 00:09:09.812 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:09.812 00:09:09.812 Allocating group tables: 0/64 done 00:09:09.812 Writing inode tables: 0/64 done 00:09:12.334 Creating journal (8192 blocks): done 00:09:13.416 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:09:13.416 00:09:13.416 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:13.416 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.416 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1781928 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.675 00:09:13.675 real 0m4.140s 00:09:13.675 user 0m0.011s 00:09:13.675 sys 0m0.067s 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:13.675 ************************************ 00:09:13.675 END TEST filesystem_ext4 00:09:13.675 ************************************ 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:13.675 14:41:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.675 ************************************ 00:09:13.675 START TEST filesystem_btrfs 00:09:13.675 ************************************ 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:13.675 14:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:13.933 btrfs-progs v6.6.2 00:09:13.933 See https://btrfs.readthedocs.io for more information. 00:09:13.933 00:09:13.933 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:13.933 NOTE: several default settings have changed in version 5.15, please make sure 00:09:13.933 this does not affect your deployments: 00:09:13.933 - DUP for metadata (-m dup) 00:09:13.933 - enabled no-holes (-O no-holes) 00:09:13.933 - enabled free-space-tree (-R free-space-tree) 00:09:13.933 00:09:13.933 Label: (null) 00:09:13.933 UUID: a6ffe19a-362f-4acb-82a4-ea8aada19187 00:09:13.933 Node size: 16384 00:09:13.933 Sector size: 4096 00:09:13.933 Filesystem size: 510.00MiB 00:09:13.933 Block group profiles: 00:09:13.933 Data: single 8.00MiB 00:09:13.933 Metadata: DUP 32.00MiB 00:09:13.933 System: DUP 8.00MiB 00:09:13.933 SSD detected: yes 00:09:13.933 Zoned device: no 00:09:13.933 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:13.933 Runtime features: free-space-tree 00:09:13.933 Checksum: crc32c 00:09:13.933 Number of devices: 1 00:09:13.933 Devices: 00:09:13.933 ID SIZE PATH 00:09:13.933 1 510.00MiB /dev/nvme0n1p1 00:09:13.933 00:09:13.933 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:13.933 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:14.865 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.865 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:14.865 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.865 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:14.865 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1781928 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.866 00:09:14.866 real 0m1.107s 00:09:14.866 user 0m0.020s 00:09:14.866 sys 0m0.111s 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.866 ************************************ 00:09:14.866 END TEST filesystem_btrfs 00:09:14.866 ************************************ 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.866 14:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.866 ************************************ 00:09:14.866 START TEST filesystem_xfs 00:09:14.866 ************************************ 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:14.866 14:41:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:14.866 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:14.866 = sectsz=512 attr=2, projid32bit=1 00:09:14.866 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:14.866 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:14.866 data = bsize=4096 blocks=130560, imaxpct=25 00:09:14.866 = sunit=0 swidth=0 blks 00:09:14.866 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:14.866 log =internal log bsize=4096 blocks=16384, version=2 00:09:14.866 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:14.866 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:16.240 Discarding blocks...Done. 
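Stepping back from the per-filesystem output: the rpc_cmd calls and host-side commands traced since 14:41:43 are what provision the subsystem these ext4/btrfs/xfs tests exercise. A condensed sketch follows, assuming rpc_cmd wraps the repository's scripts/rpc.py against the default /var/tmp/spdk.sock inside the namespace, and omitting the --hostnqn/--hostid flags shown in the log:

  RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"

  # target side: TCP transport with no in-capsule data (-c 0), plus a 512 MiB malloc bdev with 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
  $RPC bdev_malloc_create 512 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect over TCP, find the block device by its serial, carve one partition for the mkfs tests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe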
00:09:16.240 14:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:16.240 14:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1781928 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:18.767 00:09:18.767 real 0m3.662s 00:09:18.767 user 0m0.018s 00:09:18.767 sys 0m0.058s 00:09:18.767 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.768 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:18.768 ************************************ 00:09:18.768 END TEST filesystem_xfs 00:09:18.768 ************************************ 00:09:18.768 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:18.768 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:18.768 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:18.768 14:41:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.768 14:41:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1781928 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1781928 ']' 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1781928 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.768 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781928 00:09:19.026 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.026 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.026 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781928' 00:09:19.026 killing process with pid 1781928 00:09:19.026 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1781928 00:09:19.026 14:41:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1781928 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:21.554 00:09:21.554 real 0m18.143s 00:09:21.554 user 1m7.740s 00:09:21.554 sys 0m2.249s 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 ************************************ 00:09:21.554 END TEST nvmf_filesystem_no_in_capsule 00:09:21.554 ************************************ 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 ************************************ 00:09:21.554 START TEST nvmf_filesystem_in_capsule 00:09:21.554 ************************************ 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1784336 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1784336 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1784336 ']' 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.554 14:42:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 [2024-07-14 14:42:00.781636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:21.554 [2024-07-14 14:42:00.781762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.554 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.812 [2024-07-14 14:42:00.914899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.070 [2024-07-14 14:42:01.176788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.070 [2024-07-14 14:42:01.176864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
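The first pass ends just above: teardown is symmetric with setup, and then the whole flow is repeated with in-capsule data enabled. Roughly, with RPC as in the earlier sketch and the pid taken from this log:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1781928 && wait 1781928                     # stop the first nvmf_tgt instance

The second pass (nvmf_filesystem_in_capsule, starting here with pid 1784336) differs from the first only in how the transport is created: -c 4096 lets the host place up to 4 KiB of write data directly inside the command capsule instead of sending it in a separate data transfer.

  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096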
00:09:22.070 [2024-07-14 14:42:01.176903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.070 [2024-07-14 14:42:01.176942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.070 [2024-07-14 14:42:01.176961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.070 [2024-07-14 14:42:01.177048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.070 [2024-07-14 14:42:01.177101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.070 [2024-07-14 14:42:01.177150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.070 [2024-07-14 14:42:01.177160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.636 [2024-07-14 14:42:01.747383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.636 14:42:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 Malloc1 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.204 14:42:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 [2024-07-14 14:42:02.323967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:23.204 { 00:09:23.204 "name": "Malloc1", 00:09:23.204 "aliases": [ 00:09:23.204 "a9472ea7-91f2-44d4-8a44-fd68e593ce84" 00:09:23.204 ], 00:09:23.204 "product_name": "Malloc disk", 00:09:23.204 "block_size": 512, 00:09:23.204 "num_blocks": 1048576, 00:09:23.204 "uuid": "a9472ea7-91f2-44d4-8a44-fd68e593ce84", 00:09:23.204 "assigned_rate_limits": { 00:09:23.204 "rw_ios_per_sec": 0, 00:09:23.204 "rw_mbytes_per_sec": 0, 00:09:23.204 "r_mbytes_per_sec": 0, 00:09:23.204 "w_mbytes_per_sec": 0 00:09:23.204 }, 00:09:23.204 "claimed": true, 00:09:23.204 "claim_type": "exclusive_write", 00:09:23.204 "zoned": false, 00:09:23.204 "supported_io_types": { 00:09:23.204 "read": true, 00:09:23.204 "write": true, 00:09:23.204 "unmap": true, 00:09:23.204 "flush": true, 00:09:23.204 "reset": true, 00:09:23.204 "nvme_admin": false, 00:09:23.204 "nvme_io": false, 00:09:23.204 "nvme_io_md": false, 00:09:23.204 "write_zeroes": true, 00:09:23.204 "zcopy": true, 00:09:23.204 "get_zone_info": false, 00:09:23.204 "zone_management": false, 00:09:23.204 
"zone_append": false, 00:09:23.204 "compare": false, 00:09:23.204 "compare_and_write": false, 00:09:23.204 "abort": true, 00:09:23.204 "seek_hole": false, 00:09:23.204 "seek_data": false, 00:09:23.204 "copy": true, 00:09:23.204 "nvme_iov_md": false 00:09:23.204 }, 00:09:23.204 "memory_domains": [ 00:09:23.204 { 00:09:23.204 "dma_device_id": "system", 00:09:23.204 "dma_device_type": 1 00:09:23.204 }, 00:09:23.204 { 00:09:23.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.204 "dma_device_type": 2 00:09:23.204 } 00:09:23.204 ], 00:09:23.204 "driver_specific": {} 00:09:23.204 } 00:09:23.204 ]' 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:23.204 14:42:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.769 14:42:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.769 14:42:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.769 14:42:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.769 14:42:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.769 14:42:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:26.295 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:26.860 14:42:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.806 ************************************ 00:09:27.806 START TEST filesystem_in_capsule_ext4 00:09:27.806 ************************************ 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:27.806 14:42:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:27.806 14:42:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:27.806 mke2fs 1.46.5 (30-Dec-2021) 00:09:27.806 Discarding device blocks: 0/522240 done 00:09:27.806 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:27.806 Filesystem UUID: 224bbf3c-7505-462e-b402-be8b4d0f6f19 00:09:27.806 Superblock backups stored on blocks: 00:09:27.806 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:27.806 00:09:27.806 Allocating group tables: 0/64 done 00:09:27.806 Writing inode tables: 0/64 done 00:09:28.074 Creating journal (8192 blocks): done 00:09:28.589 Writing superblocks and filesystem accounting information: 0/64 done 00:09:28.589 00:09:28.589 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:28.589 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:28.589 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:28.589 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1784336 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:28.847 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:28.848 00:09:28.848 real 0m1.028s 00:09:28.848 user 0m0.009s 00:09:28.848 sys 0m0.058s 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:28.848 ************************************ 00:09:28.848 END TEST filesystem_in_capsule_ext4 00:09:28.848 ************************************ 00:09:28.848 
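Each filesystem_in_capsule_* subtest (like the no_in_capsule ones before it) runs the same body from target/filesystem.sh: format the partition, do a small create/delete cycle with syncs in between, unmount, and confirm the target survived and the namespace is still visible. In outline, using the pid of this run's nvmf_tgt:

  # formatted as ext4 here; the btrfs and xfs variants use mkfs.btrfs -f / mkfs.xfs -f, as traced above
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  # the target process must still be alive and the device nodes still present
  kill -0 1784336
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1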
14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.848 ************************************ 00:09:28.848 START TEST filesystem_in_capsule_btrfs 00:09:28.848 ************************************ 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:28.848 14:42:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:29.414 btrfs-progs v6.6.2 00:09:29.414 See https://btrfs.readthedocs.io for more information. 00:09:29.414 00:09:29.414 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:29.414 NOTE: several default settings have changed in version 5.15, please make sure 00:09:29.414 this does not affect your deployments: 00:09:29.414 - DUP for metadata (-m dup) 00:09:29.414 - enabled no-holes (-O no-holes) 00:09:29.414 - enabled free-space-tree (-R free-space-tree) 00:09:29.414 00:09:29.414 Label: (null) 00:09:29.414 UUID: 48ff0fd4-9881-4381-8c6c-2a410e9d4681 00:09:29.414 Node size: 16384 00:09:29.414 Sector size: 4096 00:09:29.414 Filesystem size: 510.00MiB 00:09:29.414 Block group profiles: 00:09:29.414 Data: single 8.00MiB 00:09:29.414 Metadata: DUP 32.00MiB 00:09:29.414 System: DUP 8.00MiB 00:09:29.414 SSD detected: yes 00:09:29.414 Zoned device: no 00:09:29.414 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:29.414 Runtime features: free-space-tree 00:09:29.414 Checksum: crc32c 00:09:29.414 Number of devices: 1 00:09:29.414 Devices: 00:09:29.414 ID SIZE PATH 00:09:29.414 1 510.00MiB /dev/nvme0n1p1 00:09:29.414 00:09:29.414 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:29.414 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1784336 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:29.672 00:09:29.672 real 0m0.870s 00:09:29.672 user 0m0.013s 00:09:29.672 sys 0m0.114s 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:29.672 ************************************ 00:09:29.672 END TEST filesystem_in_capsule_btrfs 00:09:29.672 ************************************ 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.672 ************************************ 00:09:29.672 START TEST filesystem_in_capsule_xfs 00:09:29.672 ************************************ 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:29.672 14:42:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:29.928 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:29.928 = sectsz=512 attr=2, projid32bit=1 00:09:29.928 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:29.928 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:29.928 data = bsize=4096 blocks=130560, imaxpct=25 00:09:29.928 = sunit=0 swidth=0 blks 00:09:29.928 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:29.928 log =internal log bsize=4096 blocks=16384, version=2 00:09:29.928 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:29.928 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:30.492 Discarding blocks...Done. 
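The xfs pass that follows repeats the same cycle; across ext4, btrfs and xfs only the mkfs invocation and its force flag change. A rough reconstruction of the make_filesystem helper as it shows up in the xtrace output (a sketch inferred from the traces, not the exact autotest_common.sh code, which also keeps a retry counter):

  make_filesystem() {                        # sketch: fstype and device come from the caller
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs.$fstype $force "$dev_name"        # mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f, as seen above
  }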
00:09:30.492 14:42:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:30.492 14:42:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1784336 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:32.392 00:09:32.392 real 0m2.664s 00:09:32.392 user 0m0.015s 00:09:32.392 sys 0m0.059s 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:32.392 ************************************ 00:09:32.392 END TEST filesystem_in_capsule_xfs 00:09:32.392 ************************************ 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:32.392 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:32.650 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:32.650 14:42:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.910 14:42:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1784336 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1784336 ']' 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1784336 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1784336 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1784336' 00:09:32.910 killing process with pid 1784336 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1784336 00:09:32.910 14:42:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1784336 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:35.441 00:09:35.441 real 0m13.958s 00:09:35.441 user 0m51.328s 00:09:35.441 sys 0m1.924s 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.441 ************************************ 00:09:35.441 END TEST nvmf_filesystem_in_capsule 00:09:35.441 ************************************ 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.441 rmmod nvme_tcp 00:09:35.441 rmmod nvme_fabrics 00:09:35.441 rmmod nvme_keyring 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.441 14:42:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.978 14:42:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.978 00:09:37.978 real 0m36.563s 00:09:37.978 user 1m59.976s 00:09:37.978 sys 0m5.729s 00:09:37.978 14:42:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.978 14:42:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.978 ************************************ 00:09:37.978 END TEST nvmf_filesystem 00:09:37.978 ************************************ 00:09:37.978 14:42:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:37.978 14:42:16 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:37.978 14:42:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:37.978 14:42:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.978 14:42:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.978 ************************************ 00:09:37.978 START TEST nvmf_target_discovery 00:09:37.978 ************************************ 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:37.978 * Looking for test storage... 
00:09:37.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.978 14:42:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.888 14:42:18 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.888 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:09:39.888 00:09:39.888 --- 10.0.0.2 ping statistics --- 00:09:39.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.888 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:39.889 00:09:39.889 --- 10.0.0.1 ping statistics --- 00:09:39.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.889 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1788658 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1788658 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1788658 ']' 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:39.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.889 14:42:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.889 [2024-07-14 14:42:19.089278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:39.889 [2024-07-14 14:42:19.089435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.889 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.149 [2024-07-14 14:42:19.235505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.407 [2024-07-14 14:42:19.502455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.407 [2024-07-14 14:42:19.502536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.407 [2024-07-14 14:42:19.502564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.407 [2024-07-14 14:42:19.502585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.407 [2024-07-14 14:42:19.502607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.408 [2024-07-14 14:42:19.502732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.408 [2024-07-14 14:42:19.502792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.408 [2024-07-14 14:42:19.503181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.408 [2024-07-14 14:42:19.503206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.976 14:42:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.976 14:42:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:40.976 14:42:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.976 14:42:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.976 14:42:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 [2024-07-14 14:42:20.020052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 Null1 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 [2024-07-14 14:42:20.062330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 Null2 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:40.976 14:42:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 Null3 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.976 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 Null4 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.977 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:41.234 00:09:41.234 Discovery Log Number of Records 6, Generation counter 6 00:09:41.234 =====Discovery Log Entry 0====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: current discovery subsystem 00:09:41.234 treq: not required 00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4420 00:09:41.234 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: explicit discovery connections, duplicate discovery information 00:09:41.234 sectype: none 00:09:41.234 =====Discovery Log Entry 1====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: nvme subsystem 00:09:41.234 treq: not required 00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4420 00:09:41.234 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: none 00:09:41.234 sectype: none 00:09:41.234 =====Discovery Log Entry 2====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: nvme subsystem 00:09:41.234 treq: not required 00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4420 00:09:41.234 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: none 00:09:41.234 sectype: none 00:09:41.234 =====Discovery Log Entry 3====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: nvme subsystem 00:09:41.234 treq: not required 00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4420 00:09:41.234 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: none 00:09:41.234 sectype: none 00:09:41.234 =====Discovery Log Entry 4====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: nvme subsystem 00:09:41.234 treq: not required 
00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4420 00:09:41.234 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: none 00:09:41.234 sectype: none 00:09:41.234 =====Discovery Log Entry 5====== 00:09:41.234 trtype: tcp 00:09:41.234 adrfam: ipv4 00:09:41.234 subtype: discovery subsystem referral 00:09:41.234 treq: not required 00:09:41.234 portid: 0 00:09:41.234 trsvcid: 4430 00:09:41.234 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:41.234 traddr: 10.0.0.2 00:09:41.234 eflags: none 00:09:41.234 sectype: none 00:09:41.234 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:41.234 Perform nvmf subsystem discovery via RPC 00:09:41.234 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:41.234 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.234 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.234 [ 00:09:41.234 { 00:09:41.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:41.234 "subtype": "Discovery", 00:09:41.234 "listen_addresses": [ 00:09:41.234 { 00:09:41.234 "trtype": "TCP", 00:09:41.234 "adrfam": "IPv4", 00:09:41.234 "traddr": "10.0.0.2", 00:09:41.234 "trsvcid": "4420" 00:09:41.234 } 00:09:41.234 ], 00:09:41.234 "allow_any_host": true, 00:09:41.234 "hosts": [] 00:09:41.234 }, 00:09:41.234 { 00:09:41.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.234 "subtype": "NVMe", 00:09:41.234 "listen_addresses": [ 00:09:41.234 { 00:09:41.234 "trtype": "TCP", 00:09:41.234 "adrfam": "IPv4", 00:09:41.234 "traddr": "10.0.0.2", 00:09:41.234 "trsvcid": "4420" 00:09:41.234 } 00:09:41.234 ], 00:09:41.234 "allow_any_host": true, 00:09:41.234 "hosts": [], 00:09:41.234 "serial_number": "SPDK00000000000001", 00:09:41.234 "model_number": "SPDK bdev Controller", 00:09:41.234 "max_namespaces": 32, 00:09:41.234 "min_cntlid": 1, 00:09:41.234 "max_cntlid": 65519, 00:09:41.234 "namespaces": [ 00:09:41.234 { 00:09:41.234 "nsid": 1, 00:09:41.234 "bdev_name": "Null1", 00:09:41.234 "name": "Null1", 00:09:41.234 "nguid": "72713784302C413AA41A470C6D3734B5", 00:09:41.234 "uuid": "72713784-302c-413a-a41a-470c6d3734b5" 00:09:41.234 } 00:09:41.234 ] 00:09:41.234 }, 00:09:41.234 { 00:09:41.234 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:41.234 "subtype": "NVMe", 00:09:41.234 "listen_addresses": [ 00:09:41.234 { 00:09:41.234 "trtype": "TCP", 00:09:41.234 "adrfam": "IPv4", 00:09:41.234 "traddr": "10.0.0.2", 00:09:41.234 "trsvcid": "4420" 00:09:41.234 } 00:09:41.234 ], 00:09:41.234 "allow_any_host": true, 00:09:41.234 "hosts": [], 00:09:41.234 "serial_number": "SPDK00000000000002", 00:09:41.234 "model_number": "SPDK bdev Controller", 00:09:41.234 "max_namespaces": 32, 00:09:41.234 "min_cntlid": 1, 00:09:41.234 "max_cntlid": 65519, 00:09:41.234 "namespaces": [ 00:09:41.234 { 00:09:41.234 "nsid": 1, 00:09:41.234 "bdev_name": "Null2", 00:09:41.234 "name": "Null2", 00:09:41.234 "nguid": "05839AD910A844318963C1D5DC2FA873", 00:09:41.234 "uuid": "05839ad9-10a8-4431-8963-c1d5dc2fa873" 00:09:41.234 } 00:09:41.234 ] 00:09:41.234 }, 00:09:41.234 { 00:09:41.235 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:41.235 "subtype": "NVMe", 00:09:41.235 "listen_addresses": [ 00:09:41.235 { 00:09:41.235 "trtype": "TCP", 00:09:41.235 "adrfam": "IPv4", 00:09:41.235 "traddr": "10.0.0.2", 00:09:41.235 "trsvcid": "4420" 00:09:41.235 } 00:09:41.235 ], 00:09:41.235 "allow_any_host": true, 
00:09:41.235 "hosts": [], 00:09:41.235 "serial_number": "SPDK00000000000003", 00:09:41.235 "model_number": "SPDK bdev Controller", 00:09:41.235 "max_namespaces": 32, 00:09:41.235 "min_cntlid": 1, 00:09:41.235 "max_cntlid": 65519, 00:09:41.235 "namespaces": [ 00:09:41.235 { 00:09:41.235 "nsid": 1, 00:09:41.235 "bdev_name": "Null3", 00:09:41.235 "name": "Null3", 00:09:41.235 "nguid": "E3FF741FEA174C63A3756EFDA789B27C", 00:09:41.235 "uuid": "e3ff741f-ea17-4c63-a375-6efda789b27c" 00:09:41.235 } 00:09:41.235 ] 00:09:41.235 }, 00:09:41.235 { 00:09:41.235 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:41.235 "subtype": "NVMe", 00:09:41.235 "listen_addresses": [ 00:09:41.235 { 00:09:41.235 "trtype": "TCP", 00:09:41.235 "adrfam": "IPv4", 00:09:41.235 "traddr": "10.0.0.2", 00:09:41.235 "trsvcid": "4420" 00:09:41.235 } 00:09:41.235 ], 00:09:41.235 "allow_any_host": true, 00:09:41.235 "hosts": [], 00:09:41.235 "serial_number": "SPDK00000000000004", 00:09:41.235 "model_number": "SPDK bdev Controller", 00:09:41.235 "max_namespaces": 32, 00:09:41.235 "min_cntlid": 1, 00:09:41.235 "max_cntlid": 65519, 00:09:41.235 "namespaces": [ 00:09:41.235 { 00:09:41.235 "nsid": 1, 00:09:41.235 "bdev_name": "Null4", 00:09:41.235 "name": "Null4", 00:09:41.235 "nguid": "220BFDB972A14DA2B86BDE090D33CE07", 00:09:41.235 "uuid": "220bfdb9-72a1-4da2-b86b-de090d33ce07" 00:09:41.235 } 00:09:41.235 ] 00:09:41.235 } 00:09:41.235 ] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.235 rmmod nvme_tcp 00:09:41.235 rmmod nvme_fabrics 00:09:41.235 rmmod nvme_keyring 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1788658 ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1788658 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1788658 ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1788658 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1788658 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1788658' 00:09:41.235 killing process with pid 1788658 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1788658 00:09:41.235 14:42:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1788658 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.607 14:42:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.141 14:42:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.141 00:09:45.141 real 0m7.060s 00:09:45.141 user 0m8.639s 00:09:45.141 sys 0m1.959s 00:09:45.141 14:42:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.141 14:42:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:45.141 ************************************ 00:09:45.141 END TEST nvmf_target_discovery 00:09:45.141 ************************************ 00:09:45.141 14:42:23 nvmf_tcp -- common/autotest_common.sh@1142 
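The teardown just logged (delete the four subsystems and their null bdevs, drop the 4430 referral, unload the host modules, stop the target) amounts to roughly the following standalone sequence; a sketch only, reusing the names created earlier in this run, with $SPDK and $nvmfpid as hypothetical shorthand:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for i in 1 2 3 4; do
      $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $SPDK/scripts/rpc.py bdev_null_delete Null$i
  done
  $SPDK/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # drops nvme_tcp, nvme_fabrics, nvme_keyring as shown above
  kill "$nvmfpid"   # stop nvmf_tgt (PID 1788658 in this run); the test script then waits for it to exit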
-- # return 0 00:09:45.141 14:42:23 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:45.141 14:42:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.141 14:42:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.141 14:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.141 ************************************ 00:09:45.141 START TEST nvmf_referrals 00:09:45.141 ************************************ 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:45.141 * Looking for test storage... 00:09:45.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:45.141 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.142 14:42:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.047 14:42:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.047 14:42:25 
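The device probing above reduces to: find the Intel E810 functions (vendor 0x8086, device 0x159b on this host) and collect the kernel net devices exposed under each function in sysfs. A rough standalone equivalent, with the device IDs taken from this run:

  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci"
      ls /sys/bus/pci/devices/$pci/net/    # e.g. cvl_0_0 under 0000:0a:00.0, cvl_0_1 under 0000:0a:00.1
  done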
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.047 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.047 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.048 14:42:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:09:47.048 00:09:47.048 --- 10.0.0.2 ping statistics --- 00:09:47.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.048 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:47.048 00:09:47.048 --- 10.0.0.1 ping statistics --- 00:09:47.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.048 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1790892 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1790892 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1790892 ']' 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
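The interface setup logged above is the usual two-port loopback topology for these runs: one E810 port stays in the default namespace as the initiator side (cvl_0_1, 10.0.0.1) while the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2), with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Condensed from the commands above:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address in the default namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1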
00:09:47.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.048 14:42:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.048 [2024-07-14 14:42:26.084322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:47.048 [2024-07-14 14:42:26.084462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.048 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.048 [2024-07-14 14:42:26.225372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.310 [2024-07-14 14:42:26.497242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.310 [2024-07-14 14:42:26.497320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.310 [2024-07-14 14:42:26.497349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.310 [2024-07-14 14:42:26.497370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.310 [2024-07-14 14:42:26.497392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.310 [2024-07-14 14:42:26.497649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.310 [2024-07-14 14:42:26.497704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.310 [2024-07-14 14:42:26.497750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.310 [2024-07-14 14:42:26.497761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 [2024-07-14 14:42:27.047240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 [2024-07-14 14:42:27.060658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:47.907 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:47.908 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.167 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:48.425 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.426 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:48.685 14:42:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.685 14:42:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:48.943 14:42:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:48.943 14:42:28 
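The jq filters above separate the three record subtypes nvme discover reports here: "current discovery subsystem" is the discovery service answering the query, "nvme subsystem" is a referral entry carrying the NQN of an NVMe subsystem (cnode1 in this case), and "discovery subsystem referral" points at another discovery service. Shown standalone, with the host NQN/ID and listener address from this run:

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 |
      jq '.records[] | select(.subtype == "nvme subsystem")'
  # swap the selector for "discovery subsystem referral" to see the referral back to a discovery
  # service, or invert it (!= "current discovery subsystem") to list every referral at once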
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.943 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:49.200 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.201 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:49.201 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:49.459 
14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.459 rmmod nvme_tcp 00:09:49.459 rmmod nvme_fabrics 00:09:49.459 rmmod nvme_keyring 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1790892 ']' 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1790892 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1790892 ']' 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1790892 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1790892 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1790892' 00:09:49.459 killing process with pid 1790892 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1790892 00:09:49.459 14:42:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1790892 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.834 14:42:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.741 14:42:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.741 00:09:52.741 real 0m8.070s 00:09:52.741 user 0m13.583s 00:09:52.741 sys 0m2.267s 00:09:52.741 14:42:31 
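Taken together, the referral test just completed exercises the whole lifecycle with three RPCs plus on-the-wire verification. A minimal sketch of one pass, assuming the same checkout path as in the sketches above and the discovery listener on 10.0.0.2:8009:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # advertise a referral, optionally bound to a specific subsystem NQN
  $SPDK/scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # list what the discovery service now advertises
  $SPDK/scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # confirm a host sees the same entry (add --hostnqn/--hostid as earlier if needed)
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[].traddr'
  # and remove it again
  $SPDK/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1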
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.741 14:42:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:52.741 ************************************ 00:09:52.741 END TEST nvmf_referrals 00:09:52.741 ************************************ 00:09:52.741 14:42:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.741 14:42:32 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:52.741 14:42:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.741 14:42:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.741 14:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.741 ************************************ 00:09:52.741 START TEST nvmf_connect_disconnect 00:09:52.741 ************************************ 00:09:52.741 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:52.999 * Looking for test storage... 00:09:52.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.999 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.000 14:42:32 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.000 14:42:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.900 14:42:34 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.900 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- 
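The discovery pass above matches the two Intel E810 ports (8086:0x159b) at 0000:0a:00.0 and 0000:0a:00.1 and resolves each to its kernel netdev through sysfs. A rough manual equivalent of the same lookup:

  lspci -d 8086:159b                          # the two E810 ports used by this run
  ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0, picked as NVMF_TARGET_INTERFACE
  ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1, picked as NVMF_INITIATOR_INTERFACE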
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.901 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.158 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.158 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.158 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:09:55.158 00:09:55.158 --- 10.0.0.2 ping statistics --- 00:09:55.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.158 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:55.158 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:09:55.158 00:09:55.158 --- 10.0.0.1 ping statistics --- 00:09:55.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.158 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:55.158 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1793449 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1793449 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1793449 ']' 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.159 14:42:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.159 [2024-07-14 14:42:34.377407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
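The nvmf_tcp_init sequence traced above builds the test topology by moving the target port into a private network namespace while the initiator port stays in the root namespace; collected from the xtrace lines, the effective setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmfappstart then launches the target inside that namespace, which is the nvmf_tgt invocation and DPDK initialization shown here.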
00:09:55.159 [2024-07-14 14:42:34.377563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.159 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.416 [2024-07-14 14:42:34.518056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.674 [2024-07-14 14:42:34.785938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.674 [2024-07-14 14:42:34.786015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.674 [2024-07-14 14:42:34.786045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.674 [2024-07-14 14:42:34.786075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.674 [2024-07-14 14:42:34.786098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.674 [2024-07-14 14:42:34.786221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.674 [2024-07-14 14:42:34.786279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.674 [2024-07-14 14:42:34.786323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.674 [2024-07-14 14:42:34.786335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 [2024-07-14 14:42:35.350304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.240 14:42:35 
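Once the four reactors are up, connect_disconnect.sh provisions the target over JSON-RPC; rpc_cmd here is effectively the in-tree scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Including the two calls that follow just below, the sequence is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # TCP transport, 8 KiB IO unit, no in-capsule data
  $rpc bdev_malloc_create 64 512                         # 64 MiB RAM-backed bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With num_iterations=100 for the nightly run, each of the iterations logged below is roughly:

  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # plus the NVME_HOST identity options
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the 'disconnected 1 controller(s)' lines that follow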
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 [2024-07-14 14:42:35.453044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:56.240 14:42:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:58.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.573 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:46.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.826 rmmod nvme_tcp 00:13:49.826 rmmod nvme_fabrics 00:13:49.826 rmmod nvme_keyring 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1793449 ']' 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1793449 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1793449 ']' 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1793449 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:49.826 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.083 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1793449 00:13:50.083 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.083 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.083 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1793449' 00:13:50.083 killing process with pid 1793449 00:13:50.084 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1793449 00:13:50.084 14:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1793449 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.464 14:46:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.371 14:46:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.371 00:13:53.371 real 4m0.576s 00:13:53.371 user 15m10.935s 00:13:53.371 sys 0m36.388s 00:13:53.371 14:46:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.371 14:46:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 ************************************ 00:13:53.371 END TEST nvmf_connect_disconnect 00:13:53.371 ************************************ 00:13:53.371 14:46:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.371 14:46:32 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:53.371 14:46:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.371 14:46:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.371 14:46:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 ************************************ 00:13:53.371 START TEST nvmf_multitarget 00:13:53.371 ************************************ 00:13:53.371 14:46:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:53.630 * Looking for test storage... 
00:13:53.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.630 14:46:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.530 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.531 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:13:55.531 00:13:55.531 --- 10.0.0.2 ping statistics --- 00:13:55.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.531 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:13:55.531 00:13:55.531 --- 10.0.0.1 ping statistics --- 00:13:55.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.531 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1824964 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1824964 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1824964 ']' 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.531 14:46:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.790 [2024-07-14 14:46:34.900676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
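The multitarget case checks that a single nvmf_tgt process can host several independent targets. Once this instance is listening on its RPC socket, the script drives it through test/nvmf/target/multitarget_rpc.py; the calls traced below are equivalent to:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc_py nvmf_get_targets | jq length           # 1: only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length           # 3: default plus the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length           # 1: back to just the default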
00:13:55.790 [2024-07-14 14:46:34.900810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.790 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.790 [2024-07-14 14:46:35.039031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.050 [2024-07-14 14:46:35.307202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.050 [2024-07-14 14:46:35.307285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.050 [2024-07-14 14:46:35.307314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.050 [2024-07-14 14:46:35.307335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.050 [2024-07-14 14:46:35.307358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.050 [2024-07-14 14:46:35.307472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.050 [2024-07-14 14:46:35.307530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.050 [2024-07-14 14:46:35.310913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.050 [2024-07-14 14:46:35.310917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.617 14:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:56.876 14:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:56.876 14:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:56.876 "nvmf_tgt_1" 00:13:56.876 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:57.134 "nvmf_tgt_2" 00:13:57.134 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:57.134 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:57.134 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:57.134 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:57.134 true 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:57.391 true 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.391 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.391 rmmod nvme_tcp 00:13:57.391 rmmod nvme_fabrics 00:13:57.648 rmmod nvme_keyring 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1824964 ']' 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1824964 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1824964 ']' 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1824964 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1824964 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1824964' 00:13:57.648 killing process with pid 1824964 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1824964 00:13:57.648 14:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1824964 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.027 14:46:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.934 14:46:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.934 00:14:00.934 real 0m7.381s 00:14:00.934 user 0m11.329s 00:14:00.934 sys 0m2.023s 00:14:00.934 14:46:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.934 14:46:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:00.934 ************************************ 00:14:00.934 END TEST nvmf_multitarget 00:14:00.934 ************************************ 00:14:00.934 14:46:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.934 14:46:40 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:00.934 14:46:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.934 14:46:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.934 14:46:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.934 ************************************ 00:14:00.934 START TEST nvmf_rpc 00:14:00.934 ************************************ 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:00.934 * Looking for test storage... 
00:14:00.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.934 14:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
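Sourcing nvmf/common.sh at the top of rpc.sh (traced above) also pins the identity used by every later nvme connect in this test: port 4420, serial SPDKISFASTANDAWESOME, and a host NQN generated once with nvme gen-hostnqn, whose UUID doubles as the host ID. A small sketch of that preamble; the way the UUID is extracted from the NQN is an assumption, since the trace only shows the resulting values:

  # Sketch of the connect identity set up by nvmf/common.sh (UUID extraction step is assumed)
  NVMF_PORT=4420
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # bare UUID, reused as --hostid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Used later in the test, e.g.:
  #   nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$NVMF_PORT"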
00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:03.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:03.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:03.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:03.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:14:03.470 00:14:03.470 --- 10.0.0.2 ping statistics --- 00:14:03.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.470 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:14:03.470 00:14:03.470 --- 10.0.0.1 ping statistics --- 00:14:03.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.470 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1827208 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.470 14:46:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1827208 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1827208 ']' 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.471 14:46:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.471 [2024-07-14 14:46:42.479312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:03.471 [2024-07-14 14:46:42.479442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.471 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.471 [2024-07-14 14:46:42.634495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.731 [2024-07-14 14:46:42.992713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.731 [2024-07-14 14:46:42.992810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
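nvmfappstart (traced above) launches nvmf_tgt in the background inside the target namespace and then waits, via waitforlisten, until the daemon answers on its UNIX RPC socket /var/tmp/spdk.sock; only then do the rpc_cmd calls below run. A rough sketch of that start-and-wait pattern; the polling loop here is an assumption, and the real waitforlisten in autotest_common.sh may differ:

  # Sketch: start the target in its namespace and poll until the RPC socket answers (loop shape assumed)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done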
00:14:03.731 [2024-07-14 14:46:42.992845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.731 [2024-07-14 14:46:42.992894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.731 [2024-07-14 14:46:42.992940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.731 [2024-07-14 14:46:42.993077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.731 [2024-07-14 14:46:42.993138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.731 [2024-07-14 14:46:42.993195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.731 [2024-07-14 14:46:42.993201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.298 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:04.298 "tick_rate": 2700000000, 00:14:04.298 "poll_groups": [ 00:14:04.298 { 00:14:04.298 "name": "nvmf_tgt_poll_group_000", 00:14:04.298 "admin_qpairs": 0, 00:14:04.298 "io_qpairs": 0, 00:14:04.298 "current_admin_qpairs": 0, 00:14:04.298 "current_io_qpairs": 0, 00:14:04.298 "pending_bdev_io": 0, 00:14:04.298 "completed_nvme_io": 0, 00:14:04.298 "transports": [] 00:14:04.298 }, 00:14:04.298 { 00:14:04.298 "name": "nvmf_tgt_poll_group_001", 00:14:04.298 "admin_qpairs": 0, 00:14:04.298 "io_qpairs": 0, 00:14:04.298 "current_admin_qpairs": 0, 00:14:04.298 "current_io_qpairs": 0, 00:14:04.298 "pending_bdev_io": 0, 00:14:04.298 "completed_nvme_io": 0, 00:14:04.299 "transports": [] 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_002", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [] 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_003", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [] 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.299 [2024-07-14 14:46:43.569385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:04.299 "tick_rate": 2700000000, 00:14:04.299 "poll_groups": [ 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_000", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [ 00:14:04.299 { 00:14:04.299 "trtype": "TCP" 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_001", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [ 00:14:04.299 { 00:14:04.299 "trtype": "TCP" 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_002", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [ 00:14:04.299 { 00:14:04.299 "trtype": "TCP" 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "nvmf_tgt_poll_group_003", 00:14:04.299 "admin_qpairs": 0, 00:14:04.299 "io_qpairs": 0, 00:14:04.299 "current_admin_qpairs": 0, 00:14:04.299 "current_io_qpairs": 0, 00:14:04.299 "pending_bdev_io": 0, 00:14:04.299 "completed_nvme_io": 0, 00:14:04.299 "transports": [ 00:14:04.299 { 00:14:04.299 "trtype": "TCP" 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:04.299 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.557 Malloc1 00:14:04.557 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.558 [2024-07-14 14:46:43.782912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:04.558 [2024-07-14 14:46:43.806137] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:04.558 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:04.558 could not add new controller: failed to write to nvme-fabrics device 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.558 14:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.491 14:46:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.491 14:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:05.491 14:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.491 14:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:05.491 14:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.395 14:46:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.395 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:07.396 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.694 [2024-07-14 14:46:46.707338] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:07.694 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:07.694 could not add new controller: failed to write to nvme-fabrics device 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.694 14:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.281 14:46:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.281 14:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:08.281 14:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.281 14:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:08.281 14:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:10.186 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:10.443 14:46:49 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.443 [2024-07-14 14:46:49.641677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.443 14:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.377 14:46:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.377 14:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:11.377 14:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.377 14:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:11.377 14:46:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 [2024-07-14 14:46:52.561298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.283 14:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.284 14:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.220 14:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.220 14:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:14:14.220 14:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.220 14:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:14.220 14:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:16.125 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 [2024-07-14 14:46:55.586944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.386 14:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.953 14:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.953 14:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.953 14:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.953 14:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.953 14:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:19.492 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 [2024-07-14 14:46:58.440010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.493 14:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.061 14:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.061 14:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.061 14:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.061 14:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.061 14:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.965 
14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:21.965 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 [2024-07-14 14:47:01.351264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.226 14:47:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.226 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.792 14:47:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.792 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:22.792 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.792 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:22.792 14:47:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:24.697 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:24.698 14:47:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 [2024-07-14 14:47:04.216963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.957 [2024-07-14 14:47:04.264980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 [2024-07-14 14:47:04.313189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
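Each iteration in the trace above runs the same RPC cycle against the already-running nvmf_tgt. A minimal stand-alone sketch of one pass follows; the NQN, serial, address, port and the Malloc1 bdev name are taken from this run, the rpc.py path is abbreviated, and a TCP transport is assumed to have been created beforehand (nvmf_create_transport -t tcp).

  # One pass of the create/attach/connect/tear-down loop from target/rpc.sh
  rpc=./scripts/rpc.py                # abbreviated; the full path sits under the spdk checkout
  nqn=nqn.2016-06.io.spdk:cnode1
  serial=SPDKISFASTANDAWESOME
  host=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  $rpc nvmf_create_subsystem "$nqn" -s "$serial"                        # rpc.sh@82
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420    # rpc.sh@83
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                        # rpc.sh@84 (nsid 5)
  $rpc nvmf_subsystem_allow_any_host "$nqn"                             # rpc.sh@85

  # initiator side: attach, wait for the block device, then detach
  nvme connect --hostnqn="$host" --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n "$nqn" -a 10.0.0.2 -s 4420                     # rpc.sh@86
  # ... waitforserial "$serial" here (see the sketch further down) ...
  nvme disconnect -n "$nqn"                                             # rpc.sh@90

  # target side clean-up before the next iteration
  $rpc nvmf_subsystem_remove_ns "$nqn" 5                                # rpc.sh@93
  $rpc nvmf_delete_subsystem "$nqn"                                     # rpc.sh@94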
00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.216 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 [2024-07-14 14:47:04.361323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
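waitforserial and waitforserial_disconnect (autotest_common.sh@1198 and @1219 in the trace) just poll lsblk until a block device carrying the subsystem serial appears or disappears. The helpers below are a simplified re-sketch of that polling idea, not the exact functions from autotest_common.sh; the 2-second sleep and the i <= 15 retry bound match the values visible above.

  # Poll until a block device whose SERIAL matches shows up (connect path)
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }

  # Poll until no block device with that SERIAL is left (disconnect path)
  waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1
  }

  # usage, matching the serial used throughout this run
  waitforserial SPDKISFASTANDAWESOME
  waitforserial_disconnect SPDKISFASTANDAWESOME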
00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 [2024-07-14 14:47:04.409504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:25.217 "tick_rate": 2700000000, 00:14:25.217 "poll_groups": [ 00:14:25.217 { 00:14:25.217 "name": "nvmf_tgt_poll_group_000", 00:14:25.217 "admin_qpairs": 2, 00:14:25.217 "io_qpairs": 84, 00:14:25.217 "current_admin_qpairs": 0, 00:14:25.217 "current_io_qpairs": 0, 00:14:25.217 "pending_bdev_io": 0, 00:14:25.217 "completed_nvme_io": 235, 00:14:25.217 "transports": [ 00:14:25.217 { 00:14:25.217 "trtype": "TCP" 00:14:25.217 } 00:14:25.217 ] 00:14:25.217 }, 00:14:25.217 { 00:14:25.217 "name": "nvmf_tgt_poll_group_001", 00:14:25.217 "admin_qpairs": 2, 00:14:25.217 "io_qpairs": 84, 00:14:25.217 "current_admin_qpairs": 0, 00:14:25.217 "current_io_qpairs": 0, 00:14:25.217 "pending_bdev_io": 0, 00:14:25.217 "completed_nvme_io": 86, 00:14:25.217 "transports": [ 00:14:25.217 { 00:14:25.217 "trtype": "TCP" 00:14:25.217 } 00:14:25.217 ] 00:14:25.217 }, 00:14:25.217 { 00:14:25.217 
"name": "nvmf_tgt_poll_group_002", 00:14:25.217 "admin_qpairs": 1, 00:14:25.217 "io_qpairs": 84, 00:14:25.217 "current_admin_qpairs": 0, 00:14:25.217 "current_io_qpairs": 0, 00:14:25.217 "pending_bdev_io": 0, 00:14:25.217 "completed_nvme_io": 183, 00:14:25.217 "transports": [ 00:14:25.217 { 00:14:25.217 "trtype": "TCP" 00:14:25.217 } 00:14:25.217 ] 00:14:25.217 }, 00:14:25.217 { 00:14:25.217 "name": "nvmf_tgt_poll_group_003", 00:14:25.217 "admin_qpairs": 2, 00:14:25.217 "io_qpairs": 84, 00:14:25.217 "current_admin_qpairs": 0, 00:14:25.217 "current_io_qpairs": 0, 00:14:25.217 "pending_bdev_io": 0, 00:14:25.217 "completed_nvme_io": 182, 00:14:25.217 "transports": [ 00:14:25.217 { 00:14:25.217 "trtype": "TCP" 00:14:25.217 } 00:14:25.217 ] 00:14:25.217 } 00:14:25.217 ] 00:14:25.217 }' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:25.217 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.475 rmmod nvme_tcp 00:14:25.475 rmmod nvme_fabrics 00:14:25.475 rmmod nvme_keyring 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1827208 ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1827208 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1827208 ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1827208 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1827208 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1827208' 00:14:25.475 killing process with pid 1827208 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1827208 00:14:25.475 14:47:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1827208 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.851 14:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.415 14:47:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.415 00:14:29.415 real 0m28.096s 00:14:29.415 user 1m29.420s 00:14:29.415 sys 0m4.561s 00:14:29.415 14:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.415 14:47:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.415 ************************************ 00:14:29.415 END TEST nvmf_rpc 00:14:29.415 ************************************ 00:14:29.415 14:47:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.415 14:47:08 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:29.415 14:47:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.415 14:47:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.415 14:47:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.415 ************************************ 00:14:29.415 START TEST nvmf_invalid 00:14:29.415 ************************************ 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:29.415 * Looking for test storage... 
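For reference, the qpair check that closed the nvmf_rpc test above (target/rpc.sh@110-113) pulls nvmf_get_stats and sums one numeric field across all poll groups with jq and awk. A re-sketch of that jsum pattern, with the rpc.py path abbreviated as before:

  # Sum a per-poll-group field out of nvmf_get_stats (the jsum idea)
  rpc=./scripts/rpc.py
  stats=$($rpc nvmf_get_stats)

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
  }

  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')   # 2+2+1+2 = 7 in the run above
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')         # 4 x 84  = 336 in the run above
  (( admin_qpairs > 0 && io_qpairs > 0 )) || echo "no NVMe-oF qpairs were exercised" >&2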
00:14:29.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.415 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.416 14:47:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.326 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.327 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:31.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:14:31.327 00:14:31.327 --- 10.0.0.2 ping statistics --- 00:14:31.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.327 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:14:31.327 00:14:31.327 --- 10.0.0.1 ping statistics --- 00:14:31.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.327 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1832079 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1832079 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1832079 ']' 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.327 14:47:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.327 [2024-07-14 14:47:10.523564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
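The nvmf_invalid test is wired to the same loopback test bed as the previous tests: one E810 port (cvl_0_0 on this host) is moved into a private network namespace as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator, and nvmf_tgt is started inside the namespace. A condensed sketch of that plumbing as traced above; the interface names and the nvmf_tgt path are specific to this machine.

  # Loopback test bed (nvmf_tcp_init): target port in a netns, initiator in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # reachability checks in both directions, then host driver and target start
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!    # the harness then waits for the RPC socket before issuing commands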
00:14:31.327 [2024-07-14 14:47:10.523692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.327 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.585 [2024-07-14 14:47:10.666630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.844 [2024-07-14 14:47:10.930485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.844 [2024-07-14 14:47:10.930553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.844 [2024-07-14 14:47:10.930581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.844 [2024-07-14 14:47:10.930601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.844 [2024-07-14 14:47:10.930622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.844 [2024-07-14 14:47:10.930771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.844 [2024-07-14 14:47:10.930830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.844 [2024-07-14 14:47:10.930890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.844 [2024-07-14 14:47:10.930902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:32.410 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8151 00:14:32.668 [2024-07-14 14:47:11.785009] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:32.668 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:32.668 { 00:14:32.668 "nqn": "nqn.2016-06.io.spdk:cnode8151", 00:14:32.668 "tgt_name": "foobar", 00:14:32.668 "method": "nvmf_create_subsystem", 00:14:32.668 "req_id": 1 00:14:32.668 } 00:14:32.668 Got JSON-RPC error response 00:14:32.668 response: 00:14:32.668 { 00:14:32.668 "code": -32603, 00:14:32.668 "message": "Unable to find target foobar" 00:14:32.668 }' 00:14:32.668 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:32.668 { 00:14:32.668 "nqn": "nqn.2016-06.io.spdk:cnode8151", 00:14:32.668 "tgt_name": "foobar", 00:14:32.668 "method": "nvmf_create_subsystem", 00:14:32.668 "req_id": 1 00:14:32.668 } 00:14:32.668 Got JSON-RPC error response 00:14:32.668 response: 00:14:32.668 { 00:14:32.668 "code": -32603, 00:14:32.668 "message": "Unable to find target foobar" 00:14:32.668 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:32.668 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:32.668 14:47:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6092 00:14:32.926 [2024-07-14 14:47:12.033943] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6092: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:32.926 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:32.926 { 00:14:32.926 "nqn": "nqn.2016-06.io.spdk:cnode6092", 00:14:32.926 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.926 "method": "nvmf_create_subsystem", 00:14:32.926 "req_id": 1 00:14:32.926 } 00:14:32.926 Got JSON-RPC error response 00:14:32.926 response: 00:14:32.926 { 00:14:32.926 "code": -32602, 00:14:32.926 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.926 }' 00:14:32.926 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:32.926 { 00:14:32.926 "nqn": "nqn.2016-06.io.spdk:cnode6092", 00:14:32.926 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.926 "method": "nvmf_create_subsystem", 00:14:32.926 "req_id": 1 00:14:32.926 } 00:14:32.926 Got JSON-RPC error response 00:14:32.926 response: 00:14:32.926 { 00:14:32.926 "code": -32602, 00:14:32.926 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.926 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:32.926 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:32.926 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4958 00:14:33.185 [2024-07-14 14:47:12.274757] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4958: invalid model number 'SPDK_Controller' 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:33.185 { 00:14:33.185 "nqn": "nqn.2016-06.io.spdk:cnode4958", 00:14:33.185 "model_number": "SPDK_Controller\u001f", 00:14:33.185 "method": "nvmf_create_subsystem", 00:14:33.185 "req_id": 1 00:14:33.185 } 00:14:33.185 Got JSON-RPC error response 00:14:33.185 response: 00:14:33.185 { 00:14:33.185 "code": -32602, 00:14:33.185 "message": "Invalid MN SPDK_Controller\u001f" 00:14:33.185 }' 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:33.185 { 00:14:33.185 "nqn": "nqn.2016-06.io.spdk:cnode4958", 00:14:33.185 "model_number": "SPDK_Controller\u001f", 00:14:33.185 "method": "nvmf_create_subsystem", 00:14:33.185 "req_id": 1 00:14:33.185 } 00:14:33.185 Got JSON-RPC error response 00:14:33.185 response: 00:14:33.185 { 00:14:33.185 "code": -32602, 00:14:33.185 "message": "Invalid MN SPDK_Controller\u001f" 00:14:33.185 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' 
'88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:33.185 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '>HQhIt9iGXE:dmKl4V|RF' 00:14:33.186 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>HQhIt9iGXE:dmKl4V|RF' nqn.2016-06.io.spdk:cnode29183 00:14:33.445 [2024-07-14 14:47:12.656167] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29183: invalid serial number '>HQhIt9iGXE:dmKl4V|RF' 00:14:33.445 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:33.445 { 00:14:33.445 "nqn": "nqn.2016-06.io.spdk:cnode29183", 00:14:33.445 "serial_number": ">HQhIt9iGXE:dmKl4V|RF", 00:14:33.445 "method": "nvmf_create_subsystem", 00:14:33.445 "req_id": 1 00:14:33.445 } 00:14:33.445 Got JSON-RPC error response 00:14:33.445 response: 00:14:33.445 { 00:14:33.445 
"code": -32602, 00:14:33.445 "message": "Invalid SN >HQhIt9iGXE:dmKl4V|RF" 00:14:33.445 }' 00:14:33.445 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:33.445 { 00:14:33.445 "nqn": "nqn.2016-06.io.spdk:cnode29183", 00:14:33.445 "serial_number": ">HQhIt9iGXE:dmKl4V|RF", 00:14:33.445 "method": "nvmf_create_subsystem", 00:14:33.445 "req_id": 1 00:14:33.446 } 00:14:33.446 Got JSON-RPC error response 00:14:33.446 response: 00:14:33.446 { 00:14:33.446 "code": -32602, 00:14:33.446 "message": "Invalid SN >HQhIt9iGXE:dmKl4V|RF" 00:14:33.446 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.446 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:33.745 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 
00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 
00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z!p}D"8+* Si,f_JM\!t/^g^$b+K44T48S98,Frxl' 00:14:33.746 14:47:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'z!p}D"8+* Si,f_JM\!t/^g^$b+K44T48S98,Frxl' nqn.2016-06.io.spdk:cnode8083 00:14:33.746 [2024-07-14 14:47:13.041509] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8083: invalid model number 'z!p}D"8+* Si,f_JM\!t/^g^$b+K44T48S98,Frxl' 00:14:34.006 14:47:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@58 -- # out='request: 00:14:34.006 { 00:14:34.006 "nqn": "nqn.2016-06.io.spdk:cnode8083", 00:14:34.006 "model_number": "z!p}D\"8+* Si,f_JM\\!t/^g^$b+K44T48S98,Frxl", 00:14:34.006 "method": "nvmf_create_subsystem", 00:14:34.006 "req_id": 1 00:14:34.006 } 00:14:34.006 Got JSON-RPC error response 00:14:34.006 response: 00:14:34.006 { 00:14:34.006 "code": -32602, 00:14:34.006 "message": "Invalid MN z!p}D\"8+* Si,f_JM\\!t/^g^$b+K44T48S98,Frxl" 00:14:34.006 }' 00:14:34.006 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:34.006 { 00:14:34.006 "nqn": "nqn.2016-06.io.spdk:cnode8083", 00:14:34.006 "model_number": "z!p}D\"8+* Si,f_JM\\!t/^g^$b+K44T48S98,Frxl", 00:14:34.006 "method": "nvmf_create_subsystem", 00:14:34.006 "req_id": 1 00:14:34.006 } 00:14:34.006 Got JSON-RPC error response 00:14:34.006 response: 00:14:34.006 { 00:14:34.006 "code": -32602, 00:14:34.006 "message": "Invalid MN z!p}D\"8+* Si,f_JM\\!t/^g^$b+K44T48S98,Frxl" 00:14:34.006 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:34.006 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:34.263 [2024-07-14 14:47:13.334543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.263 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:34.520 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:34.520 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:34.520 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:34.520 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:34.520 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:34.777 [2024-07-14 14:47:13.921923] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:34.777 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:34.777 { 00:14:34.777 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:34.777 "listen_address": { 00:14:34.777 "trtype": "tcp", 00:14:34.777 "traddr": "", 00:14:34.777 "trsvcid": "4421" 00:14:34.777 }, 00:14:34.777 "method": "nvmf_subsystem_remove_listener", 00:14:34.777 "req_id": 1 00:14:34.777 } 00:14:34.777 Got JSON-RPC error response 00:14:34.777 response: 00:14:34.777 { 00:14:34.777 "code": -32602, 00:14:34.777 "message": "Invalid parameters" 00:14:34.777 }' 00:14:34.777 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:34.778 { 00:14:34.778 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:34.778 "listen_address": { 00:14:34.778 "trtype": "tcp", 00:14:34.778 "traddr": "", 00:14:34.778 "trsvcid": "4421" 00:14:34.778 }, 00:14:34.778 "method": "nvmf_subsystem_remove_listener", 00:14:34.778 "req_id": 1 00:14:34.778 } 00:14:34.778 Got JSON-RPC error response 00:14:34.778 response: 00:14:34.778 { 00:14:34.778 "code": -32602, 00:14:34.778 "message": "Invalid parameters" 00:14:34.778 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:34.778 14:47:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25472 -i 0 00:14:35.036 
[2024-07-14 14:47:14.166703] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25472: invalid cntlid range [0-65519] 00:14:35.036 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:35.036 { 00:14:35.036 "nqn": "nqn.2016-06.io.spdk:cnode25472", 00:14:35.036 "min_cntlid": 0, 00:14:35.036 "method": "nvmf_create_subsystem", 00:14:35.036 "req_id": 1 00:14:35.036 } 00:14:35.036 Got JSON-RPC error response 00:14:35.036 response: 00:14:35.036 { 00:14:35.036 "code": -32602, 00:14:35.036 "message": "Invalid cntlid range [0-65519]" 00:14:35.036 }' 00:14:35.036 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:35.036 { 00:14:35.036 "nqn": "nqn.2016-06.io.spdk:cnode25472", 00:14:35.036 "min_cntlid": 0, 00:14:35.036 "method": "nvmf_create_subsystem", 00:14:35.036 "req_id": 1 00:14:35.036 } 00:14:35.036 Got JSON-RPC error response 00:14:35.036 response: 00:14:35.036 { 00:14:35.036 "code": -32602, 00:14:35.036 "message": "Invalid cntlid range [0-65519]" 00:14:35.036 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.036 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23257 -i 65520 00:14:35.294 [2024-07-14 14:47:14.463645] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23257: invalid cntlid range [65520-65519] 00:14:35.294 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:35.294 { 00:14:35.294 "nqn": "nqn.2016-06.io.spdk:cnode23257", 00:14:35.294 "min_cntlid": 65520, 00:14:35.294 "method": "nvmf_create_subsystem", 00:14:35.294 "req_id": 1 00:14:35.294 } 00:14:35.294 Got JSON-RPC error response 00:14:35.294 response: 00:14:35.294 { 00:14:35.294 "code": -32602, 00:14:35.294 "message": "Invalid cntlid range [65520-65519]" 00:14:35.294 }' 00:14:35.294 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:35.294 { 00:14:35.294 "nqn": "nqn.2016-06.io.spdk:cnode23257", 00:14:35.294 "min_cntlid": 65520, 00:14:35.294 "method": "nvmf_create_subsystem", 00:14:35.294 "req_id": 1 00:14:35.294 } 00:14:35.294 Got JSON-RPC error response 00:14:35.294 response: 00:14:35.294 { 00:14:35.294 "code": -32602, 00:14:35.294 "message": "Invalid cntlid range [65520-65519]" 00:14:35.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.294 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5708 -I 0 00:14:35.553 [2024-07-14 14:47:14.760667] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5708: invalid cntlid range [1-0] 00:14:35.553 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:35.553 { 00:14:35.553 "nqn": "nqn.2016-06.io.spdk:cnode5708", 00:14:35.553 "max_cntlid": 0, 00:14:35.553 "method": "nvmf_create_subsystem", 00:14:35.553 "req_id": 1 00:14:35.553 } 00:14:35.553 Got JSON-RPC error response 00:14:35.553 response: 00:14:35.553 { 00:14:35.553 "code": -32602, 00:14:35.553 "message": "Invalid cntlid range [1-0]" 00:14:35.553 }' 00:14:35.553 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:35.553 { 00:14:35.553 "nqn": "nqn.2016-06.io.spdk:cnode5708", 00:14:35.553 "max_cntlid": 0, 00:14:35.553 "method": "nvmf_create_subsystem", 00:14:35.553 "req_id": 1 00:14:35.553 } 00:14:35.553 Got 
JSON-RPC error response 00:14:35.553 response: 00:14:35.553 { 00:14:35.553 "code": -32602, 00:14:35.553 "message": "Invalid cntlid range [1-0]" 00:14:35.553 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.553 14:47:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32184 -I 65520 00:14:35.811 [2024-07-14 14:47:15.053723] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32184: invalid cntlid range [1-65520] 00:14:35.811 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:35.811 { 00:14:35.811 "nqn": "nqn.2016-06.io.spdk:cnode32184", 00:14:35.811 "max_cntlid": 65520, 00:14:35.811 "method": "nvmf_create_subsystem", 00:14:35.811 "req_id": 1 00:14:35.811 } 00:14:35.811 Got JSON-RPC error response 00:14:35.811 response: 00:14:35.811 { 00:14:35.811 "code": -32602, 00:14:35.811 "message": "Invalid cntlid range [1-65520]" 00:14:35.811 }' 00:14:35.811 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:35.811 { 00:14:35.811 "nqn": "nqn.2016-06.io.spdk:cnode32184", 00:14:35.811 "max_cntlid": 65520, 00:14:35.811 "method": "nvmf_create_subsystem", 00:14:35.811 "req_id": 1 00:14:35.811 } 00:14:35.811 Got JSON-RPC error response 00:14:35.811 response: 00:14:35.811 { 00:14:35.811 "code": -32602, 00:14:35.811 "message": "Invalid cntlid range [1-65520]" 00:14:35.811 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.811 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24838 -i 6 -I 5 00:14:36.069 [2024-07-14 14:47:15.306607] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24838: invalid cntlid range [6-5] 00:14:36.069 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:36.069 { 00:14:36.069 "nqn": "nqn.2016-06.io.spdk:cnode24838", 00:14:36.069 "min_cntlid": 6, 00:14:36.069 "max_cntlid": 5, 00:14:36.069 "method": "nvmf_create_subsystem", 00:14:36.069 "req_id": 1 00:14:36.069 } 00:14:36.069 Got JSON-RPC error response 00:14:36.069 response: 00:14:36.069 { 00:14:36.069 "code": -32602, 00:14:36.069 "message": "Invalid cntlid range [6-5]" 00:14:36.069 }' 00:14:36.069 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:36.069 { 00:14:36.069 "nqn": "nqn.2016-06.io.spdk:cnode24838", 00:14:36.069 "min_cntlid": 6, 00:14:36.069 "max_cntlid": 5, 00:14:36.069 "method": "nvmf_create_subsystem", 00:14:36.069 "req_id": 1 00:14:36.069 } 00:14:36.069 Got JSON-RPC error response 00:14:36.069 response: 00:14:36.069 { 00:14:36.070 "code": -32602, 00:14:36.070 "message": "Invalid cntlid range [6-5]" 00:14:36.070 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.070 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:36.329 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:36.329 { 00:14:36.329 "name": "foobar", 00:14:36.329 "method": "nvmf_delete_target", 00:14:36.329 "req_id": 1 00:14:36.329 } 00:14:36.329 Got JSON-RPC error response 00:14:36.329 response: 00:14:36.329 { 00:14:36.329 "code": -32602, 00:14:36.329 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:14:36.329 }' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:36.330 { 00:14:36.330 "name": "foobar", 00:14:36.330 "method": "nvmf_delete_target", 00:14:36.330 "req_id": 1 00:14:36.330 } 00:14:36.330 Got JSON-RPC error response 00:14:36.330 response: 00:14:36.330 { 00:14:36.330 "code": -32602, 00:14:36.330 "message": "The specified target doesn't exist, cannot delete it." 00:14:36.330 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.330 rmmod nvme_tcp 00:14:36.330 rmmod nvme_fabrics 00:14:36.330 rmmod nvme_keyring 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1832079 ']' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1832079 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1832079 ']' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1832079 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832079 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832079' 00:14:36.330 killing process with pid 1832079 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1832079 00:14:36.330 14:47:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1832079 00:14:37.707 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.707 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
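Every case in invalid.sh above follows the same negative-test pattern: issue an nvmf_create_subsystem (or nvmf_delete_target) RPC with one deliberately bad field, capture the JSON-RPC error object, and glob-match the expected message before moving on. Condensed to a single check, and assuming the same workspace path for rpc.py, the shape of each case is roughly:

    # hedged sketch of one invalid-parameter check, not part of the recorded run
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25472 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "target rejected min_cntlid=0 as expected"

The long printf/string+= trace in the middle of this test is gen_random_s building the throw-away 21- and 41-character serial and model numbers one character at a time from ASCII codes 32-127 before feeding them to the same RPC.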
00:14:37.708 14:47:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.619 14:47:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.619 00:14:39.619 real 0m10.582s 00:14:39.619 user 0m26.337s 00:14:39.619 sys 0m2.602s 00:14:39.619 14:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.619 14:47:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:39.619 ************************************ 00:14:39.619 END TEST nvmf_invalid 00:14:39.619 ************************************ 00:14:39.619 14:47:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:39.619 14:47:18 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:39.619 14:47:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.619 14:47:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.619 14:47:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.619 ************************************ 00:14:39.619 START TEST nvmf_abort 00:14:39.619 ************************************ 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:39.619 * Looking for test storage... 00:14:39.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.619 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.878 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.879 
14:47:18 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.879 14:47:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.782 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:41.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:41.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:41.783 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:41.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.783 14:47:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.783 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:14:41.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:14:41.783 00:14:41.783 --- 10.0.0.2 ping statistics --- 00:14:41.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.783 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:14:41.783 00:14:41.783 --- 10.0.0.1 ping statistics --- 00:14:41.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.783 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.783 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1834866 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1834866 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1834866 ']' 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.784 14:47:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.044 [2024-07-14 14:47:21.163401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
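The ping exchange above is the tail end of nvmf_tcp_init: the script has just split the two ice ports between network namespaces so that one physical host can act as both NVMe/TCP target and initiator. A minimal sketch of that topology, using the interface names and addresses taken from this log (illustrative only; the real logic lives in test/nvmf/common.sh and handles more hardware layouts):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                  # start both ports from a clean slate
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                        # target side gets its own namespace
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                        # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> root namespace

Because NVMF_APP is then prefixed with "ip netns exec cvl_0_0_ns_spdk", the nvmf_tgt started below runs inside that namespace and its TCP listener binds to 10.0.0.2, while the initiator-side tools connect from the root namespace over cvl_0_1.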
00:14:42.044 [2024-07-14 14:47:21.163542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.044 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.044 [2024-07-14 14:47:21.294935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.305 [2024-07-14 14:47:21.532709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.305 [2024-07-14 14:47:21.532780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.305 [2024-07-14 14:47:21.532815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.305 [2024-07-14 14:47:21.532837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.305 [2024-07-14 14:47:21.532860] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.305 [2024-07-14 14:47:21.532989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.305 [2024-07-14 14:47:21.533041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.305 [2024-07-14 14:47:21.533050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.873 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.873 [2024-07-14 14:47:22.112674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.874 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.874 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:42.874 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.874 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 Malloc0 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 Delay0 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 [2024-07-14 14:47:22.231655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.135 14:47:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:43.135 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.135 [2024-07-14 14:47:22.400315] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:45.670 Initializing NVMe Controllers 00:14:45.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:45.670 controller IO queue size 128 less than required 00:14:45.670 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:45.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:45.670 Initialization complete. Launching workers. 
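At this point target/abort.sh has finished building the target through rpc_cmd (effectively a wrapper around scripts/rpc.py) and handed control to the abort example. Condensed, the RPC sequence and initiator command look roughly like this, with values copied from the trace and error handling omitted:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB backing bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s of added latency per I/O (values in microseconds)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -c 0x1 -t 1 -l warning -q 128

The delay bdev is the point of the test: with roughly a second of latency injected per I/O and a queue depth of 128 against a controller queue of the same size, most submitted reads are still outstanding when the example issues aborts for them, which is what produces the "abort submitted ... success ..." accounting in the summary that follows.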
00:14:45.670 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25164 00:14:45.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25221, failed to submit 66 00:14:45.671 success 25164, unsuccess 57, failed 0 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.671 rmmod nvme_tcp 00:14:45.671 rmmod nvme_fabrics 00:14:45.671 rmmod nvme_keyring 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1834866 ']' 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1834866 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1834866 ']' 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1834866 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1834866 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1834866' 00:14:45.671 killing process with pid 1834866 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1834866 00:14:45.671 14:47:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1834866 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.055 14:47:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.960 14:47:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.960 00:14:48.960 real 0m9.145s 00:14:48.960 user 0m14.812s 00:14:48.960 sys 0m2.696s 00:14:48.960 14:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.960 14:47:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:48.960 ************************************ 00:14:48.960 END TEST nvmf_abort 00:14:48.960 ************************************ 00:14:48.960 14:47:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:48.960 14:47:28 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:48.960 14:47:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.960 14:47:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.960 14:47:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.960 ************************************ 00:14:48.960 START TEST nvmf_ns_hotplug_stress 00:14:48.960 ************************************ 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:48.960 * Looking for test storage... 00:14:48.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.960 14:47:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.960 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.961 14:47:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.961 14:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:50.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:50.893 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.893 14:47:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:50.893 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:50.893 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.893 14:47:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.893 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.151 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.151 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.151 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:14:51.151 00:14:51.151 --- 10.0.0.2 ping statistics --- 00:14:51.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.151 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:51.151 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:51.151 00:14:51.151 --- 10.0.0.1 ping statistics --- 00:14:51.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.152 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1837349 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1837349 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1837349 ']' 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.152 14:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.152 [2024-07-14 14:47:30.349913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
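The ns_hotplug_stress run rebuilds the same namespace topology and then starts its own target instance; nvmfappstart records the pid (nvmfpid=1837349) and waitforlisten blocks until the RPC socket answers. In outline, and simplified (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh, and their polling details differ):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait until the target's RPC server is reachable on the default UNIX socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done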
00:14:51.152 [2024-07-14 14:47:30.350049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.411 [2024-07-14 14:47:30.482380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.670 [2024-07-14 14:47:30.740524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.670 [2024-07-14 14:47:30.740600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.670 [2024-07-14 14:47:30.740634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.670 [2024-07-14 14:47:30.740655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.670 [2024-07-14 14:47:30.740681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.670 [2024-07-14 14:47:30.740834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.670 [2024-07-14 14:47:30.740920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.670 [2024-07-14 14:47:30.740928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:52.234 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.234 [2024-07-14 14:47:31.536347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.492 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.749 14:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.007 [2024-07-14 14:47:32.122365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.007 14:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.264 14:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:53.522 Malloc0 00:14:53.522 14:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:53.779 Delay0 00:14:53.779 14:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.049 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:54.307 NULL1 00:14:54.307 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:54.565 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1837778 00:14:54.565 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:54.565 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:14:54.565 14:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.565 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.948 Read completed with error (sct=0, sc=11) 00:14:55.948 14:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:56.206 14:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:56.207 14:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:56.465 true 00:14:56.465 14:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:14:56.465 14:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.034 14:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.602 14:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:57.602 14:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:57.602 true 00:14:57.602 
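From here the test settles into its stress loop: spdk_nvme_perf (PERF_PID=1837778) issues 512-byte random reads at queue depth 128 for 30 seconds while the script repeatedly removes the namespace out from under it, re-adds it, and resizes NULL1 a step larger each pass. Reconstructed from the trace and simplified; the -Q 1000 flag is copied verbatim from the perf command line, and reading it as an error-tolerance/reporting knob is an inference from the "Message suppressed 999 times" lines, not something the log states:

    null_size=1000
    build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    while kill -0 "$PERF_PID"; do     # keep cycling for as long as perf is still running
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done

The bursts of "Read completed with error (sct=0, sc=11)" are expected here: status 0x0b in the generic status code type reads as "Invalid Namespace or Format", consistent with I/Os landing while NSID 1 is briefly detached.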
14:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:14:57.602 14:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.169 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.169 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:58.169 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:58.427 true 00:14:58.427 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:14:58.427 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.685 14:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.943 14:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:58.943 14:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:59.199 true 00:14:59.199 14:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:14:59.199 14:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.134 14:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.391 14:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:00.391 14:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:00.648 true 00:15:00.648 14:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:00.648 14:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.905 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.192 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:01.192 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:01.474 true 00:15:01.474 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:01.474 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.732 14:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.989 14:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:01.989 14:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:02.247 true 00:15:02.247 14:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:02.247 14:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:03.181 14:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.439 14:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:03.439 14:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:03.696 true 00:15:03.696 14:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:03.696 14:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.954 14:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.211 14:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:04.211 14:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:04.468 true 00:15:04.468 14:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:04.468 14:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.400 14:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.400 14:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:05.400 14:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:05.658 true 00:15:05.658 14:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:05.658 14:47:44 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.915 14:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.173 14:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:06.173 14:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:06.430 true 00:15:06.430 14:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:06.430 14:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.364 14:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.623 14:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:07.623 14:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:07.881 true 00:15:07.881 14:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:07.881 14:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.140 14:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.399 14:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:08.399 14:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:08.399 true 00:15:08.658 14:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:08.658 14:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.225 14:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:09.484 14:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:09.484 14:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:09.742 true 00:15:09.742 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:09.742 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.000 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.258 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:10.258 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:10.518 true 00:15:10.518 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:10.518 14:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.455 14:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.712 14:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:11.712 14:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:11.970 true 00:15:11.970 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:11.970 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.228 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.486 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:12.486 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:12.744 true 00:15:12.744 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:12.744 14:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.681 14:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:13.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:13.938 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:13.939 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:14.195 true 00:15:14.195 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:14.195 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.451 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.451 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:14.451 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:14.709 true 00:15:14.709 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:14.709 14:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.117 14:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.117 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:16.117 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:16.375 true 00:15:16.375 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:16.375 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.633 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.890 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:16.890 14:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:17.149 true 00:15:17.149 14:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:17.149 14:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.089 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:18.089 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:18.089 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:18.346 true 00:15:18.346 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:18.346 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.603 14:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.861 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:18.861 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:19.119 true 00:15:19.119 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:19.119 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.377 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.635 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:19.635 14:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:19.893 true 00:15:19.893 14:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:19.893 14:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.268 14:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.268 14:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:21.268 14:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:21.526 true 00:15:21.526 14:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:21.526 14:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.090 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.090 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:22.090 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:22.346 true 00:15:22.346 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:22.346 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.603 14:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.860 14:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:22.860 14:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:23.118 true 00:15:23.118 14:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:23.118 14:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.491 14:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.491 14:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:24.491 14:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:24.748 true 00:15:24.748 14:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:24.748 14:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.006 Initializing NVMe Controllers 00:15:25.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.006 Controller IO queue size 128, less than required. 00:15:25.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:25.006 Controller IO queue size 128, less than required. 00:15:25.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:25.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:25.006 Initialization complete. Launching workers. 
00:15:25.006 ======================================================== 00:15:25.006 Latency(us) 00:15:25.006 Device Information : IOPS MiB/s Average min max 00:15:25.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 452.94 0.22 136085.92 3023.71 1039664.28 00:15:25.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7819.03 3.82 16372.39 3595.61 480757.42 00:15:25.006 ======================================================== 00:15:25.006 Total : 8271.98 4.04 22927.48 3023.71 1039664.28 00:15:25.006 00:15:25.006 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.263 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:25.263 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:25.521 true 00:15:25.521 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1837778 00:15:25.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1837778) - No such process 00:15:25.521 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1837778 00:15:25.521 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.778 14:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:26.036 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:26.036 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:26.036 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:26.036 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.036 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:26.294 null0 00:15:26.294 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.294 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.294 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:26.551 null1 00:15:26.551 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.551 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.551 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:26.809 null2 00:15:26.809 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.809 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:15:26.809 14:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:27.066 null3 00:15:27.066 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.066 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.066 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:27.352 null4 00:15:27.352 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.352 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.352 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:27.610 null5 00:15:27.610 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.610 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.610 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:27.866 null6 00:15:27.866 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.866 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.866 14:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:28.123 null7 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
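The perf summary a few lines above closes out the first phase of this test: while the spdk_nvme_perf job (PID 1837778) ran against the subsystem, the loop traced as ns_hotplug_stress.sh@44-@50 repeatedly hot-removed namespace 1, re-attached Delay0, and grew NULL1 by one block per pass (null_size 1001 through 1029 in the trace). Condensed from that xtrace, with the long rpc.py path abbreviated, the loop is roughly the following sketch, not the script verbatim:

    # First phase (serial hotplug under perf load), reconstructed from the
    # @44-@50 xtrace above.
    null_size=1000                      # NULL1 was created with 1000 blocks at @35
    while kill -0 "$PERF_PID"; do       # keep going while spdk_nvme_perf is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done
    wait "$PERF_PID"                    # @53: reap perf once kill -0 fails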
00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
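Each background worker being launched here runs the add_remove helper whose xtrace appears as the @14-@18 lines. Reduced to its shape, again with the rpc.py path abbreviated (a sketch rather than the script verbatim), it is:

    # add_remove <nsid> <bdev>: attach <bdev> as namespace <nsid> of cnode1,
    # then detach it, ten times in a row (the @16 "(( i < 10 ))" bound above).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }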
00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.123 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
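The driver around those workers (the @58-@64 lines here, plus the @66 wait that follows) creates eight 100 MB null bdevs with a 4096-byte block size and fans out one add_remove per bdev in the background, collecting the PIDs so it can wait for all of them. A minimal sketch of that scaffolding, under the same rpc.py abbreviation:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # @60: null0 .. null7
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # @63: namespace IDs 1..8
        pids+=($!)                                  # @64: remember each worker's PID
    done
    wait "${pids[@]}"                               # @66: 1841924 1841925 ... in this run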
00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1841924 1841925 1841927 1841929 1841931 1841933 1841935 1841937 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.124 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:28.381 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.639 14:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:28.897 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.154 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.155 14:48:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.155 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.412 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.671 14:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.930 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.188 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.188 
14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.446 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.704 14:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:30.962 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.220 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.221 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.479 
14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:31.479 14:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.737 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.995 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.252 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.510 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:32.767 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.767 
14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.767 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.767 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.767 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.767 14:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.024 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.025 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:33.282 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.540 rmmod nvme_tcp 00:15:33.540 rmmod nvme_fabrics 00:15:33.540 rmmod nvme_keyring 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1837349 ']' 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1837349 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1837349 ']' 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1837349 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1837349 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1837349' 00:15:33.540 killing process with pid 1837349 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1837349 00:15:33.540 14:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1837349 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.941 14:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.473 14:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.473 00:15:37.473 real 0m48.131s 00:15:37.473 user 3m34.026s 00:15:37.473 sys 0m16.466s 00:15:37.473 14:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.473 14:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.473 ************************************ 00:15:37.473 END TEST nvmf_ns_hotplug_stress 00:15:37.473 ************************************ 00:15:37.473 14:48:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:37.473 14:48:16 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:37.473 14:48:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:37.473 14:48:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.473 14:48:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.473 ************************************ 00:15:37.473 START TEST nvmf_connect_stress 00:15:37.473 ************************************ 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:37.473 * Looking for test storage... 
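Editor's note on the nvmf_ns_hotplug_stress output above: the interleaved @16/@17/@18 entries are bash xtrace from target/ns_hotplug_stress.sh, where several workers repeatedly attach and detach namespaces on nqn.2016-06.io.spdk:cnode1. The script text itself is not in the log; the following is a minimal reconstruction inferred from the trace (the parallel-worker structure and variable names are assumptions, the rpc.py arguments are copied from the log):

#!/usr/bin/env bash
# Hedged sketch of the hot-plug loop seen in the xtrace above; not the literal SPDK script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do                      # matches the "(( ++i ))" / "(( i < 10 ))" trace lines
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &                  # null0..null7, as in the log
done
wait

Because the eight workers run concurrently and each RPC takes about the same time, the adds and removes cluster into the alternating bursts visible in the trace.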
00:15:37.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.473 14:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.847 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.848 14:48:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.848 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:15:39.106 00:15:39.106 --- 10.0.0.2 ping statistics --- 00:15:39.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.106 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:15:39.106 00:15:39.106 --- 10.0.0.1 ping statistics --- 00:15:39.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.106 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1845312 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1845312 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1845312 ']' 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.106 14:48:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.106 [2024-07-14 14:48:18.330895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
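Editor's note on the nvmf_tcp_init / nvmfappstart trace above: condensed to plain commands (taken directly from the log entries; cvl_0_0 and cvl_0_1 are the two E810 ports detected earlier), the target-side network bring-up and application launch amount to roughly the following. This is a transcript-style summary, not the common.sh helper itself:

# Target port moves into a network namespace, initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns, verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns, verified above
# Launch the NVMe-oF target inside the namespace, as shown in the nvmfappstart entry.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The entries that follow show the target being configured over rpc.py (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener on 10.0.0.2:4420, bdev_null_create) before the connect_stress tool is started against it.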
00:15:39.106 [2024-07-14 14:48:18.331032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.106 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.364 [2024-07-14 14:48:18.471121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.622 [2024-07-14 14:48:18.730940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.622 [2024-07-14 14:48:18.731014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.623 [2024-07-14 14:48:18.731047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.623 [2024-07-14 14:48:18.731068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.623 [2024-07-14 14:48:18.731089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.623 [2024-07-14 14:48:18.731227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.623 [2024-07-14 14:48:18.731318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.623 [2024-07-14 14:48:18.731327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.188 [2024-07-14 14:48:19.250311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.188 [2024-07-14 14:48:19.285379] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.188 NULL1 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1845465 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.188 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.189 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.446 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.446 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:40.446 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.446 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.446 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.704 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.704 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:40.704 14:48:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.704 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.704 14:48:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.270 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.270 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 
00:15:41.270 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.270 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.270 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.529 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.529 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:41.529 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.529 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.529 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.787 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:41.787 14:48:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.787 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.787 14:48:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.045 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.045 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:42.045 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.045 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.045 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.303 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.303 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:42.303 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.303 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.303 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.869 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.869 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:42.869 14:48:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.869 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.869 14:48:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.127 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.127 14:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:43.127 14:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.127 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.127 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.385 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.385 14:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:43.385 14:48:22 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.385 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.385 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.643 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.643 14:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:43.643 14:48:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.643 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.643 14:48:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.208 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.208 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:44.208 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.208 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.208 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.465 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.465 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:44.465 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.465 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.465 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.721 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.721 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:44.721 14:48:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.721 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.721 14:48:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.978 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.978 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:44.978 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.978 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.978 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.236 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.236 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:45.236 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.236 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.236 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.800 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.800 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:45.800 14:48:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.800 
14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.800 14:48:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.058 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.058 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:46.058 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.058 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.058 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.316 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.316 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:46.316 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.316 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.316 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.574 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.574 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:46.574 14:48:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.574 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.574 14:48:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.140 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.140 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:47.140 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.140 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.140 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.398 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.398 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:47.398 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.398 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.398 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.656 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.656 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:47.656 14:48:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.656 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.656 14:48:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.914 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.914 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:47.914 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.914 14:48:27 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.914 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.172 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.172 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:48.172 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.172 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.172 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.738 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.738 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:48.738 14:48:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.738 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.738 14:48:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.996 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.996 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:48.996 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.996 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.996 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.254 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.254 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:49.254 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.254 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.254 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.512 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.512 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:49.512 14:48:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.512 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.512 14:48:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.770 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:49.770 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.770 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.770 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.336 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.336 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:50.336 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.336 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:50.336 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.336 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1845465 00:15:50.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1845465) - No such process 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1845465 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.594 rmmod nvme_tcp 00:15:50.594 rmmod nvme_fabrics 00:15:50.594 rmmod nvme_keyring 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1845312 ']' 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1845312 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1845312 ']' 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1845312 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1845312 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1845312' 00:15:50.594 killing process with pid 1845312 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1845312 00:15:50.594 14:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1845312 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.997 14:48:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.902 14:48:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.902 00:15:53.902 real 0m16.817s 00:15:53.902 user 0m42.325s 00:15:53.902 sys 0m5.696s 00:15:53.902 14:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.903 14:48:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.903 ************************************ 00:15:53.903 END TEST nvmf_connect_stress 00:15:53.903 ************************************ 00:15:53.903 14:48:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.903 14:48:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.903 14:48:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.903 14:48:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.903 14:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.903 ************************************ 00:15:53.903 START TEST nvmf_fused_ordering 00:15:53.903 ************************************ 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.903 * Looking for test storage... 
00:15:53.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.903 14:48:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:55.800 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:55.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.800 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:56.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.059 14:48:35 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:56.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:56.059 00:15:56.059 --- 10.0.0.2 ping statistics --- 00:15:56.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.059 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:15:56.059 00:15:56.059 --- 10.0.0.1 ping statistics --- 00:15:56.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.059 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1848748 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1848748 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1848748 ']' 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.059 14:48:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.059 [2024-07-14 14:48:35.352244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:56.059 [2024-07-14 14:48:35.352368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.317 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.317 [2024-07-14 14:48:35.495349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.575 [2024-07-14 14:48:35.749629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.575 [2024-07-14 14:48:35.749696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.575 [2024-07-14 14:48:35.749737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.575 [2024-07-14 14:48:35.749787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.575 [2024-07-14 14:48:35.749821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.575 [2024-07-14 14:48:35.749901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 [2024-07-14 14:48:36.283841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.140 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.141 [2024-07-14 14:48:36.300064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.141 14:48:36 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.141 NULL1 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.141 14:48:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:57.141 [2024-07-14 14:48:36.371257] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:57.141 [2024-07-14 14:48:36.371347] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848902 ] 00:15:57.141 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.705 Attached to nqn.2016-06.io.spdk:cnode1 00:15:57.705 Namespace ID: 1 size: 1GB 00:15:57.705 fused_ordering(0) 00:15:57.705 fused_ordering(1) 00:15:57.705 fused_ordering(2) 00:15:57.705 fused_ordering(3) 00:15:57.705 fused_ordering(4) 00:15:57.705 fused_ordering(5) 00:15:57.705 fused_ordering(6) 00:15:57.705 fused_ordering(7) 00:15:57.705 fused_ordering(8) 00:15:57.705 fused_ordering(9) 00:15:57.705 fused_ordering(10) 00:15:57.705 fused_ordering(11) 00:15:57.705 fused_ordering(12) 00:15:57.705 fused_ordering(13) 00:15:57.705 fused_ordering(14) 00:15:57.705 fused_ordering(15) 00:15:57.705 fused_ordering(16) 00:15:57.705 fused_ordering(17) 00:15:57.705 fused_ordering(18) 00:15:57.705 fused_ordering(19) 00:15:57.705 fused_ordering(20) 00:15:57.705 fused_ordering(21) 00:15:57.705 fused_ordering(22) 00:15:57.705 fused_ordering(23) 00:15:57.705 fused_ordering(24) 00:15:57.705 fused_ordering(25) 00:15:57.705 fused_ordering(26) 00:15:57.705 fused_ordering(27) 00:15:57.705 fused_ordering(28) 00:15:57.705 fused_ordering(29) 00:15:57.705 fused_ordering(30) 00:15:57.705 fused_ordering(31) 00:15:57.705 fused_ordering(32) 00:15:57.705 fused_ordering(33) 00:15:57.705 fused_ordering(34) 00:15:57.705 fused_ordering(35) 00:15:57.705 fused_ordering(36) 00:15:57.705 fused_ordering(37) 00:15:57.705 fused_ordering(38) 00:15:57.705 fused_ordering(39) 00:15:57.705 fused_ordering(40) 00:15:57.705 fused_ordering(41) 00:15:57.705 fused_ordering(42) 00:15:57.705 fused_ordering(43) 00:15:57.705 
fused_ordering(44) 00:15:57.705 ... fused_ordering(1012) 00:16:00.332 (per-operation fused_ordering counters 44 through 1012, logged between 00:15:57.705 and 00:16:00.332)
00:16:00.332 fused_ordering(1013) 00:16:00.332 fused_ordering(1014) 00:16:00.332 fused_ordering(1015) 00:16:00.332 fused_ordering(1016) 00:16:00.332 fused_ordering(1017) 00:16:00.332 fused_ordering(1018) 00:16:00.332 fused_ordering(1019) 00:16:00.332 fused_ordering(1020) 00:16:00.332 fused_ordering(1021) 00:16:00.332 fused_ordering(1022) 00:16:00.332 fused_ordering(1023) 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.332 rmmod nvme_tcp 00:16:00.332 rmmod nvme_fabrics 00:16:00.332 rmmod nvme_keyring 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1848748 ']' 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1848748 00:16:00.332 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1848748 ']' 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1848748 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1848748 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1848748' 00:16:00.333 killing process with pid 1848748 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1848748 00:16:00.333 14:48:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1848748 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.704 14:48:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.271 14:48:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.271 00:16:04.271 real 0m9.858s 00:16:04.271 user 0m8.008s 00:16:04.271 sys 0m3.534s 00:16:04.271 14:48:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.271 14:48:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.271 ************************************ 00:16:04.271 END TEST nvmf_fused_ordering 00:16:04.271 ************************************ 00:16:04.271 14:48:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.271 14:48:42 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:04.271 14:48:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.271 14:48:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.271 14:48:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.271 ************************************ 00:16:04.271 START TEST nvmf_delete_subsystem 00:16:04.271 ************************************ 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:04.271 * Looking for test storage... 00:16:04.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.271 14:48:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:04.271 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.272 14:48:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.272 14:48:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.172 14:48:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.172 14:48:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.172 14:48:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.172 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.173 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.173 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.173 14:48:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:16:06.173 00:16:06.173 --- 10.0.0.2 ping statistics --- 00:16:06.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.173 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:16:06.173 00:16:06.173 --- 10.0.0.1 ping statistics --- 00:16:06.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.173 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1851361 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1851361 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1851361 ']' 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.173 14:48:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.173 [2024-07-14 14:48:45.222472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:06.173 [2024-07-14 14:48:45.222608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.173 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.173 [2024-07-14 14:48:45.351806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.431 [2024-07-14 14:48:45.564322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:06.431 [2024-07-14 14:48:45.564394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.431 [2024-07-14 14:48:45.564436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.431 [2024-07-14 14:48:45.564453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.431 [2024-07-14 14:48:45.564470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.431 [2024-07-14 14:48:45.564547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.431 [2024-07-14 14:48:45.564557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.998 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 [2024-07-14 14:48:46.163621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 [2024-07-14 14:48:46.181088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 NULL1 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 Delay0 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1851511 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:06.999 14:48:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:06.999 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.257 [2024-07-14 14:48:46.315606] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
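For reference, the rpc_cmd sequence traced above can be reproduced outside the test harness with scripts/rpc.py against an already running nvmf_tgt. The sketch below is a hand-written equivalent, not part of the recorded log: the $SPDK_ROOT path is a placeholder, and it assumes the target is reachable on the default RPC socket; every RPC name, NQN, address, and flag value is copied from this run.

  # TCP transport with the same options the test passed (-o -u 8192)
  $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces
  $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on the target-namespace address used by this run
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MB null bdev with 512-byte blocks, wrapped in a delay bdev with large
  # artificial latencies so I/O is still in flight when the subsystem is deleted
  $SPDK_ROOT/scripts/rpc.py bdev_null_create NULL1 1000 512
  $SPDK_ROOT/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # background 70/30 random read/write load: 512-byte I/O, queue depth 128, 5 seconds
  $SPDK_ROOT/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  # deleting the subsystem while the perf job runs produces the aborted completions seen below
  $SPDK_ROOT/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

In the traced run the same deletion is issued by delete_subsystem.sh@32 right after the two-second sleep, which is why the following records show reads and writes completing with error status while new submissions fail with -6.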
00:16:09.156 14:48:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.156 14:48:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.156 14:48:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.156 starting I/O failed: -6 00:16:09.156 Write completed with error (sct=0, sc=8) 00:16:09.156 Write completed with error (sct=0, sc=8) 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.156 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Write completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 Read completed with error (sct=0, sc=8) 00:16:09.414 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 
00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 starting I/O failed: -6 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 
00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 [2024-07-14 14:48:48.467154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write 
completed with error (sct=0, sc=8) 00:16:09.415 Read completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.415 Write completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Write completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 Read completed with error (sct=0, sc=8) 00:16:09.416 [2024-07-14 14:48:48.467930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:16:10.350 [2024-07-14 14:48:49.415673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read 
completed with error (sct=0, sc=8) 00:16:10.350 [2024-07-14 14:48:49.466066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.350 Write completed with error (sct=0, sc=8) 00:16:10.350 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 [2024-07-14 14:48:49.466680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed 
with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 [2024-07-14 14:48:49.467520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Write completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 Read completed with error (sct=0, sc=8) 00:16:10.351 [2024-07-14 14:48:49.472135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:16:10.351 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.351 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:10.351 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851511 00:16:10.351 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:10.351 Initializing NVMe Controllers 00:16:10.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.351 Controller IO queue size 128, less than required. 00:16:10.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:10.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:10.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:10.351 Initialization complete. Launching workers. 
00:16:10.351 ======================================================== 00:16:10.351 Latency(us) 00:16:10.351 Device Information : IOPS MiB/s Average min max 00:16:10.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.05 0.10 944926.89 1058.18 1014733.43 00:16:10.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.43 0.08 869968.26 996.83 1016789.79 00:16:10.351 ======================================================== 00:16:10.351 Total : 352.47 0.17 911448.17 996.83 1016789.79 00:16:10.351 00:16:10.351 [2024-07-14 14:48:49.473690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:16:10.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851511 00:16:10.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1851511) - No such process 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1851511 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1851511 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1851511 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.917 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.918 [2024-07-14 14:48:49.993888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.918 14:48:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1851920 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:10.918 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.918 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.918 [2024-07-14 14:48:50.115292] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:11.483 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.483 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:11.483 14:48:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:11.740 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.740 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:11.740 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.304 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.305 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:12.305 14:48:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.871 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.871 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:12.871 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:13.435 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:13.435 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:13.435 14:48:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:14.000 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:14.001 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:14.001 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:14.258 Initializing NVMe Controllers 00:16:14.258 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.258 Controller IO queue size 128, less than required. 00:16:14.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:14.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:14.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:14.258 Initialization complete. Launching workers. 00:16:14.258 ======================================================== 00:16:14.258 Latency(us) 00:16:14.258 Device Information : IOPS MiB/s Average min max 00:16:14.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005081.19 1000242.31 1042759.53 00:16:14.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005317.90 1000244.13 1015334.72 00:16:14.258 ======================================================== 00:16:14.258 Total : 256.00 0.12 1005199.54 1000242.31 1042759.53 00:16:14.258 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1851920 00:16:14.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1851920) - No such process 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1851920 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.258 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.258 rmmod nvme_tcp 00:16:14.258 rmmod nvme_fabrics 00:16:14.516 rmmod nvme_keyring 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1851361 ']' 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1851361 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1851361 ']' 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1851361 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851361 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851361' 00:16:14.516 killing process with pid 1851361 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1851361 00:16:14.516 14:48:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1851361 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.891 14:48:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.793 14:48:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.793 00:16:17.793 real 0m13.946s 00:16:17.793 user 0m30.592s 00:16:17.793 sys 0m3.184s 00:16:17.793 14:48:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.793 14:48:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 ************************************ 00:16:17.793 END TEST nvmf_delete_subsystem 00:16:17.793 ************************************ 00:16:17.793 14:48:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.793 14:48:56 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.793 14:48:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.793 14:48:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.793 14:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 ************************************ 00:16:17.793 START TEST nvmf_ns_masking 00:16:17.793 ************************************ 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.793 * Looking for test storage... 
00:16:17.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dacc4c18-be87-4e1c-948e-1688cc8adccf 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6ac8672b-e829-4363-9092-450576b46e0d 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fecb69d0-74d3-403c-b01f-f8a4ffa024c4 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.793 14:48:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.794 14:48:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.794 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.794 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.794 14:48:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.794 14:48:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:19.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:19.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.697 
14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:19.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:19.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.697 14:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:16:19.956 00:16:19.956 --- 10.0.0.2 ping statistics --- 00:16:19.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.956 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:16:19.956 00:16:19.956 --- 10.0.0.1 ping statistics --- 00:16:19.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.956 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1854395 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1854395 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1854395 ']' 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.956 14:48:59 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.956 14:48:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:19.956 [2024-07-14 14:48:59.190233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:19.956 [2024-07-14 14:48:59.190370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.956 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.214 [2024-07-14 14:48:59.317263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.473 [2024-07-14 14:48:59.572810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.473 [2024-07-14 14:48:59.572898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.473 [2024-07-14 14:48:59.572939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.473 [2024-07-14 14:48:59.572965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.473 [2024-07-14 14:48:59.572986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.473 [2024-07-14 14:48:59.573031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.038 14:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:21.296 [2024-07-14 14:49:00.367713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.296 14:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:21.296 14:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:21.296 14:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:21.554 Malloc1 00:16:21.554 14:49:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:21.812 Malloc2 00:16:22.070 14:49:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
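The nvmftestinit portion of the trace above builds the physical TCP test bed before the masking checks start: the two e810 ports found under 0000:0a:00.0/1 come up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the nvmf target is launched inside that namespace. A rough outline of what those common.sh helpers do (reconstructed from the commands traced above, with $rootdir as before; not a verbatim replay):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # target runs inside the namespace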
00:16:22.328 14:49:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:22.585 14:49:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.585 [2024-07-14 14:49:01.881279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.843 14:49:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:22.843 14:49:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fecb69d0-74d3-403c-b01f-f8a4ffa024c4 -a 10.0.0.2 -s 4420 -i 4 00:16:22.843 14:49:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.843 14:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.843 14:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.843 14:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.843 14:49:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.385 [ 0]:0x1 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=53df038d5fca4b87a65d1396430c85d2 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 53df038d5fca4b87a65d1396430c85d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
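The connect and visibility checks that follow are plain nvme-cli run from the initiator side of the netns split. A sketch of what the ns_is_visible helper boils down to, assuming the controller enumerates as /dev/nvme0 as it does in this run:

  # connect as host1; -I sets the host UUID, -i 4 requests 4 I/O queues
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I fecb69d0-74d3-403c-b01f-f8a4ffa024c4 -a 10.0.0.2 -s 4420 -i 4
  # a namespace counts as visible if it appears in the active namespace list ...
  nvme list-ns /dev/nvme0 | grep 0x1
  # ... and its identify data reports a non-zero NGUID
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid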
00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.385 [ 0]:0x1 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.385 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=53df038d5fca4b87a65d1396430c85d2 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 53df038d5fca4b87a65d1396430c85d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:25.386 [ 1]:0x2 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:25.386 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.644 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.902 14:49:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:26.161 14:49:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:26.161 14:49:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fecb69d0-74d3-403c-b01f-f8a4ffa024c4 -a 10.0.0.2 -s 4420 -i 4 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:26.162 14:49:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:28.704 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:28.705 14:49:07 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.705 [ 0]:0x2 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.705 [ 0]:0x1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=53df038d5fca4b87a65d1396430c85d2 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 53df038d5fca4b87a65d1396430c85d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.705 [ 1]:0x2 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.705 14:49:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.964 [ 0]:0x2 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:28.964 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:29.222 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:29.223 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:29.223 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:29.223 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.223 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fecb69d0-74d3-403c-b01f-f8a4ffa024c4 -a 10.0.0.2 -s 4420 -i 4 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:29.480 14:49:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
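This is the core of the masking flow being exercised above: namespace 1 was re-added with --no-auto-visible, so it stays hidden from connected hosts until it is explicitly attached, while namespace 2 (added without that flag) shows up immediately. The control RPCs involved, condensed into a standalone sketch:

  # add the namespace without auto-attaching it to any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # attach NSID 1 to host1 only; the host then sees it on the next list-ns
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # detach it again; the host goes back to seeing only the auto-visible namespace
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1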
00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:32.018 [ 0]:0x1 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:32.018 14:49:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=53df038d5fca4b87a65d1396430c85d2 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 53df038d5fca4b87a65d1396430c85d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:32.018 [ 1]:0x2 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.018 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:32.276 [ 0]:0x2 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:32.276 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:32.534 [2024-07-14 14:49:11.627749] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:32.534 request: 00:16:32.534 { 00:16:32.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.534 "nsid": 2, 00:16:32.534 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.534 "method": "nvmf_ns_remove_host", 00:16:32.535 "req_id": 1 00:16:32.535 } 00:16:32.535 Got JSON-RPC error response 00:16:32.535 response: 00:16:32.535 { 00:16:32.535 "code": -32602, 00:16:32.535 "message": "Invalid parameters" 00:16:32.535 } 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:32.535 [ 0]:0x2 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d329a246e814980aee9cce02046a8c1 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
6d329a246e814980aee9cce02046a8c1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1856030 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1856030 /var/tmp/host.sock 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1856030 ']' 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:32.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.535 14:49:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:32.813 [2024-07-14 14:49:11.886974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:32.813 [2024-07-14 14:49:11.887122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856030 ] 00:16:32.813 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.813 [2024-07-14 14:49:12.017730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.071 [2024-07-14 14:49:12.268596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.039 14:49:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.039 14:49:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:34.039 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.297 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:34.554 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dacc4c18-be87-4e1c-948e-1688cc8adccf 00:16:34.554 14:49:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:34.554 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DACC4C18BE874E1C948E1688CC8ADCCF -i 00:16:34.812 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6ac8672b-e829-4363-9092-450576b46e0d 00:16:34.812 14:49:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:34.812 14:49:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6AC8672BE82943639092450576B46E0D -i 00:16:35.069 14:49:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:35.328 14:49:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:35.587 14:49:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:35.587 14:49:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:35.846 nvme0n1 00:16:35.846 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:35.846 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:16:36.106 nvme1n2 00:16:36.366 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:36.366 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:36.366 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:36.366 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:36.366 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:36.626 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:36.626 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:36.626 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:36.626 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:36.886 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dacc4c18-be87-4e1c-948e-1688cc8adccf == \d\a\c\c\4\c\1\8\-\b\e\8\7\-\4\e\1\c\-\9\4\8\e\-\1\6\8\8\c\c\8\a\d\c\c\f ]] 00:16:36.886 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:36.886 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:36.886 14:49:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6ac8672b-e829-4363-9092-450576b46e0d == \6\a\c\8\6\7\2\b\-\e\8\2\9\-\4\3\6\3\-\9\0\9\2\-\4\5\0\5\7\6\b\4\6\e\0\d ]] 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1856030 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1856030 ']' 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1856030 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856030 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856030' 00:16:37.146 killing process with pid 1856030 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1856030 00:16:37.146 14:49:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1856030 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:39.682 14:49:18 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.682 rmmod nvme_tcp 00:16:39.682 rmmod nvme_fabrics 00:16:39.682 rmmod nvme_keyring 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1854395 ']' 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1854395 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1854395 ']' 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1854395 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1854395 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1854395' 00:16:39.682 killing process with pid 1854395 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1854395 00:16:39.682 14:49:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1854395 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.589 14:49:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.499 14:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.499 00:16:43.499 real 0m25.616s 00:16:43.499 user 0m34.611s 00:16:43.499 sys 0m4.399s 00:16:43.499 14:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.499 14:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:43.499 ************************************ 00:16:43.499 END TEST nvmf_ns_masking 00:16:43.499 ************************************ 00:16:43.499 14:49:22 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:43.499 14:49:22 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:43.499 14:49:22 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:43.499 14:49:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.499 14:49:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.499 14:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.499 ************************************ 00:16:43.499 START TEST nvmf_nvme_cli 00:16:43.499 ************************************ 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:43.499 * Looking for test storage... 00:16:43.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.499 14:49:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:45.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:45.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:45.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:45.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.408 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.409 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.666 14:49:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:16:45.666 00:16:45.666 --- 10.0.0.2 ping statistics --- 00:16:45.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.666 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:45.666 00:16:45.666 --- 10.0.0.1 ping statistics --- 00:16:45.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.666 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1859034 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1859034 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1859034 ']' 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.666 14:49:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.666 [2024-07-14 14:49:24.921475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:45.666 [2024-07-14 14:49:24.921635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.923 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.923 [2024-07-14 14:49:25.063004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.180 [2024-07-14 14:49:25.327028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.180 [2024-07-14 14:49:25.327104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.180 [2024-07-14 14:49:25.327132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.180 [2024-07-14 14:49:25.327154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.180 [2024-07-14 14:49:25.327175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.180 [2024-07-14 14:49:25.327507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.180 [2024-07-14 14:49:25.327608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.180 [2024-07-14 14:49:25.327758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.180 [2024-07-14 14:49:25.327766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 [2024-07-14 14:49:25.872341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 Malloc0 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 Malloc1 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.747 14:49:26 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.747 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:47.008 [2024-07-14 14:49:26.063006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:47.008 00:16:47.008 Discovery Log Number of Records 2, Generation counter 2 00:16:47.008 =====Discovery Log Entry 0====== 00:16:47.008 trtype: tcp 00:16:47.008 adrfam: ipv4 00:16:47.008 subtype: current discovery subsystem 00:16:47.008 treq: not required 00:16:47.008 portid: 0 00:16:47.008 trsvcid: 4420 00:16:47.008 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:47.008 traddr: 10.0.0.2 00:16:47.008 eflags: explicit discovery connections, duplicate discovery information 00:16:47.008 sectype: none 00:16:47.008 =====Discovery Log Entry 1====== 00:16:47.008 trtype: tcp 00:16:47.008 adrfam: ipv4 00:16:47.008 subtype: nvme subsystem 00:16:47.008 treq: not required 00:16:47.008 portid: 0 00:16:47.008 trsvcid: 4420 00:16:47.008 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:47.008 traddr: 10.0.0.2 00:16:47.008 eflags: none 00:16:47.008 sectype: none 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:47.008 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:47.941 14:49:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:49.837 14:49:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:49.837 /dev/nvme0n1 ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:49.837 14:49:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.838 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.095 rmmod nvme_tcp 00:16:50.095 rmmod nvme_fabrics 00:16:50.095 rmmod nvme_keyring 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1859034 ']' 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1859034 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1859034 ']' 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1859034 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859034 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859034' 00:16:50.095 killing process with pid 1859034 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1859034 00:16:50.095 14:49:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1859034 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.994 14:49:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.893 14:49:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.893 00:16:53.893 real 0m10.176s 00:16:53.893 user 0m20.948s 00:16:53.893 sys 0m2.369s 00:16:53.893 14:49:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.893 14:49:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.893 ************************************ 00:16:53.893 END TEST nvmf_nvme_cli 00:16:53.893 ************************************ 00:16:53.893 14:49:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.893 14:49:32 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:53.893 14:49:32 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.893 14:49:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.893 14:49:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.893 14:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.893 ************************************ 00:16:53.893 START TEST nvmf_host_management 00:16:53.893 ************************************ 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.893 * Looking for test storage... 00:16:53.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.893 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.894 
14:49:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.894 14:49:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:55.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:55.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:55.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:55.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.820 14:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:16:55.820 00:16:55.820 --- 10.0.0.2 ping statistics --- 00:16:55.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.820 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:16:55.820 00:16:55.820 --- 10.0.0.1 ping statistics --- 00:16:55.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.820 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.820 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1861679 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1861679 00:16:55.821 
14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1861679 ']' 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.821 14:49:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.079 [2024-07-14 14:49:35.170908] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:56.079 [2024-07-14 14:49:35.171050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.079 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.079 [2024-07-14 14:49:35.308290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.337 [2024-07-14 14:49:35.574732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.337 [2024-07-14 14:49:35.574816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.337 [2024-07-14 14:49:35.574843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.337 [2024-07-14 14:49:35.574864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.337 [2024-07-14 14:49:35.574897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.337 [2024-07-14 14:49:35.575026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.337 [2024-07-14 14:49:35.575086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.337 [2024-07-14 14:49:35.575319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:56.337 [2024-07-14 14:49:35.575322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.903 [2024-07-14 14:49:36.135415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.903 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.161 Malloc0 00:16:57.161 [2024-07-14 14:49:36.249625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1861855 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1861855 /var/tmp/bdevperf.sock 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1861855 ']' 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.161 { 00:16:57.161 "params": { 00:16:57.161 "name": "Nvme$subsystem", 00:16:57.161 "trtype": "$TEST_TRANSPORT", 00:16:57.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.161 "adrfam": "ipv4", 00:16:57.161 "trsvcid": "$NVMF_PORT", 00:16:57.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.161 "hdgst": ${hdgst:-false}, 00:16:57.161 "ddgst": ${ddgst:-false} 00:16:57.161 }, 00:16:57.161 "method": "bdev_nvme_attach_controller" 00:16:57.161 } 00:16:57.161 EOF 00:16:57.161 )") 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:57.161 14:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.161 "params": { 00:16:57.161 "name": "Nvme0", 00:16:57.161 "trtype": "tcp", 00:16:57.161 "traddr": "10.0.0.2", 00:16:57.161 "adrfam": "ipv4", 00:16:57.161 "trsvcid": "4420", 00:16:57.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:57.161 "hdgst": false, 00:16:57.161 "ddgst": false 00:16:57.161 }, 00:16:57.161 "method": "bdev_nvme_attach_controller" 00:16:57.161 }' 00:16:57.161 [2024-07-14 14:49:36.362136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:57.161 [2024-07-14 14:49:36.362313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861855 ] 00:16:57.161 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.419 [2024-07-14 14:49:36.492529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.676 [2024-07-14 14:49:36.731036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.243 Running I/O for 10 seconds... 
00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:58.243 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:58.507 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:58.507 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:58.507 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=424 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 424 -ge 100 ']' 00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.508 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:58.508 [2024-07-14 14:49:37.650493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set
[... the same tcp.c:1607 recv-state error for tqpair=0x618000003480 repeats at every timestamp from 14:49:37.650587 through 14:49:37.651741 ...]
00:16:58.508 [2024-07-14 14:49:37.651857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:58.508 [2024-07-14 14:49:37.651925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1 through cid:63 (lba:57472 through lba:65408), timestamps 14:49:37.651979 through 14:49:37.655016 ...]
00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:58.510 [2024-07-14 14:49:37.655041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:58.510 [2024-07-14 14:49:37.655351] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
00:16:58.510 [2024-07-14 14:49:37.656662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:58.510 task offset: 57344 on job bdev=Nvme0n1 fails 00:16:58.510 00:16:58.510 Latency(us) 00:16:58.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.510 Job: Nvme0n1 ended in about 0.37 seconds with error 00:16:58.510 Verification LBA range: start 0x0 length 0x400 00:16:58.510 Nvme0n1 : 0.37 1205.89 75.37 172.27 0.00 44942.08 9126.49 40777.96 00:16:58.510 =================================================================================================================== 00:16:58.510 Total : 1205.89 75.37 172.27 0.00 44942.08 9126.49 40777.96 00:16:58.510 [2024-07-14 14:49:37.662208] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:58.510 [2024-07-14 14:49:37.662273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.510 14:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:58.510 [2024-07-14 14:49:37.714113] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1861855 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.442 { 00:16:59.442 "params": { 00:16:59.442 "name": "Nvme$subsystem", 00:16:59.442 "trtype": "$TEST_TRANSPORT", 00:16:59.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.442 "adrfam": "ipv4", 00:16:59.442 "trsvcid": "$NVMF_PORT", 00:16:59.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.442 "hdgst": ${hdgst:-false}, 00:16:59.442 "ddgst": ${ddgst:-false} 00:16:59.442 }, 00:16:59.442 "method": "bdev_nvme_attach_controller" 00:16:59.442 } 00:16:59.442 EOF 00:16:59.442 )") 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:59.442 14:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.442 "params": { 00:16:59.442 "name": "Nvme0", 00:16:59.442 "trtype": "tcp", 00:16:59.442 "traddr": "10.0.0.2", 00:16:59.442 "adrfam": "ipv4", 00:16:59.442 "trsvcid": "4420", 00:16:59.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:59.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:59.442 "hdgst": false, 00:16:59.442 "ddgst": false 00:16:59.442 }, 00:16:59.442 "method": "bdev_nvme_attach_controller" 00:16:59.442 }' 00:16:59.442 [2024-07-14 14:49:38.748239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:59.443 [2024-07-14 14:49:38.748378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862133 ] 00:16:59.700 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.700 [2024-07-14 14:49:38.876905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.958 [2024-07-14 14:49:39.118579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.524 Running I/O for 1 seconds... 00:17:01.456 00:17:01.456 Latency(us) 00:17:01.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:01.456 Verification LBA range: start 0x0 length 0x400 00:17:01.456 Nvme0n1 : 1.01 1391.04 86.94 0.00 0.00 45212.62 7524.50 39612.87 00:17:01.456 =================================================================================================================== 00:17:01.456 Total : 1391.04 86.94 0.00 0.00 45212.62 7524.50 39612.87 00:17:02.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1861855 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.389 rmmod nvme_tcp 00:17:02.389 rmmod nvme_fabrics 00:17:02.389 rmmod nvme_keyring 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1861679 ']' 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1861679 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1861679 ']' 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1861679 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861679 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861679' 00:17:02.389 killing process with pid 1861679 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1861679 00:17:02.389 14:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1861679 00:17:03.764 [2024-07-14 14:49:42.911782] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.764 14:49:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.296 14:49:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.296 14:49:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:06.296 00:17:06.296 real 0m12.121s 00:17:06.296 user 0m33.691s 00:17:06.296 sys 0m3.054s 00:17:06.296 14:49:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.296 14:49:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 ************************************ 00:17:06.296 END TEST nvmf_host_management 00:17:06.296 ************************************ 00:17:06.296 14:49:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.296 14:49:45 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:06.296 14:49:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 
1 ']' 00:17:06.296 14:49:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.296 14:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 ************************************ 00:17:06.296 START TEST nvmf_lvol 00:17:06.296 ************************************ 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:06.296 * Looking for test storage... 00:17:06.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.296 14:49:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.297 14:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.193 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.194 14:49:47 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:08.194 00:17:08.194 --- 10.0.0.2 ping statistics --- 00:17:08.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.194 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:08.194 00:17:08.194 --- 10.0.0.1 ping statistics --- 00:17:08.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.194 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1864502 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1864502 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1864502 ']' 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.194 14:49:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.194 [2024-07-14 14:49:47.369482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:08.194 [2024-07-14 14:49:47.369618] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.194 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.452 [2024-07-14 14:49:47.508464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.709 [2024-07-14 14:49:47.763477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.709 [2024-07-14 14:49:47.763543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.709 [2024-07-14 14:49:47.763591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.709 [2024-07-14 14:49:47.763612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.709 [2024-07-14 14:49:47.763633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.709 [2024-07-14 14:49:47.763747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.709 [2024-07-14 14:49:47.763804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.709 [2024-07-14 14:49:47.763814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.967 14:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.967 14:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:08.967 14:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.967 14:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.967 14:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.224 14:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.224 14:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.481 [2024-07-14 14:49:48.565446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.481 14:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.738 14:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:09.738 14:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.995 14:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:09.995 14:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:10.585 14:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:10.842 14:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b370069d-cf87-4ff7-99dc-f9450dfb739a 00:17:10.842 14:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b370069d-cf87-4ff7-99dc-f9450dfb739a lvol 20 00:17:11.100 14:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3a7ce946-60a2-4ba1-a00b-72143f7aaff8 00:17:11.100 14:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.358 14:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a7ce946-60a2-4ba1-a00b-72143f7aaff8 00:17:11.615 14:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:11.872 [2024-07-14 14:49:50.988969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.872 14:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.129 14:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1865033 00:17:12.129 14:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:12.129 14:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:12.129 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.059 14:49:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3a7ce946-60a2-4ba1-a00b-72143f7aaff8 MY_SNAPSHOT 00:17:13.317 14:49:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a80c3ae2-0226-40b8-9441-6edb8d6b1fc6 00:17:13.317 14:49:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3a7ce946-60a2-4ba1-a00b-72143f7aaff8 30 00:17:13.883 14:49:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a80c3ae2-0226-40b8-9441-6edb8d6b1fc6 MY_CLONE 00:17:14.143 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c253f3ba-dd2c-4cca-852b-aa9c4d98f480 00:17:14.143 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c253f3ba-dd2c-4cca-852b-aa9c4d98f480 00:17:14.735 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1865033 00:17:22.843 Initializing NVMe Controllers 00:17:22.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:22.843 Controller IO queue size 128, less than required. 00:17:22.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:22.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:22.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:22.843 Initialization complete. Launching workers. 
00:17:22.844 ======================================================== 00:17:22.844 Latency(us) 00:17:22.844 Device Information : IOPS MiB/s Average min max 00:17:22.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8313.30 32.47 15412.08 453.46 135980.71 00:17:22.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8055.40 31.47 15896.11 3269.74 149455.55 00:17:22.844 ======================================================== 00:17:22.844 Total : 16368.70 63.94 15650.28 453.46 149455.55 00:17:22.844 00:17:22.844 14:50:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:22.844 14:50:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a7ce946-60a2-4ba1-a00b-72143f7aaff8 00:17:23.100 14:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b370069d-cf87-4ff7-99dc-f9450dfb739a 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.358 rmmod nvme_tcp 00:17:23.358 rmmod nvme_fabrics 00:17:23.358 rmmod nvme_keyring 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1864502 ']' 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1864502 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1864502 ']' 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1864502 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1864502 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1864502' 00:17:23.358 killing process with pid 1864502 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1864502 00:17:23.358 14:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1864502 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.276 
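The nvmf_lvol pass traced above drives the target entirely through rpc.py while spdk_nvme_perf generates load. A condensed sketch of that sequence, reconstructed only from the commands visible in this log — the rpc.py path is abbreviated, and the shell variables stand in for the UUIDs each create call prints at run time (the log shows fixed values such as b370069d-... and 3a7ce946-...):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                   # Malloc0
  rpc.py bdev_malloc_create 64 512                                   # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on top of the raid
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &             # I/O runs while lvol ops proceed
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"
  wait                                                               # perf finishes its 10 s run
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"

The snapshot/resize/clone/inflate calls land while the perf job is still writing, which is the point of the test: lvol metadata operations must not disturb in-flight NVMe/TCP I/O.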
14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.276 14:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.176 00:17:27.176 real 0m21.108s 00:17:27.176 user 1m10.038s 00:17:27.176 sys 0m5.552s 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:27.176 ************************************ 00:17:27.176 END TEST nvmf_lvol 00:17:27.176 ************************************ 00:17:27.176 14:50:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.176 14:50:06 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.176 14:50:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.176 14:50:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.176 14:50:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.176 ************************************ 00:17:27.176 START TEST nvmf_lvs_grow 00:17:27.176 ************************************ 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.176 * Looking for test storage... 
00:17:27.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.176 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.177 14:50:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.077 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.077 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.077 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.078 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:17:29.336 00:17:29.336 --- 10.0.0.2 ping statistics --- 00:17:29.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.336 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:29.336 00:17:29.336 --- 10.0.0.1 ping statistics --- 00:17:29.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.336 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1868422 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1868422 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1868422 ']' 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.336 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.336 [2024-07-14 14:50:08.611513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:29.336 [2024-07-14 14:50:08.611660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.593 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.593 [2024-07-14 14:50:08.746463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.851 [2024-07-14 14:50:08.995676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.851 [2024-07-14 14:50:08.995762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:29.851 [2024-07-14 14:50:08.995790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.851 [2024-07-14 14:50:08.995828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.851 [2024-07-14 14:50:08.995849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.851 [2024-07-14 14:50:08.995915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.416 14:50:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.675 [2024-07-14 14:50:09.856485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.675 ************************************ 00:17:30.675 START TEST lvs_grow_clean 00:17:30.675 ************************************ 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.675 14:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.951 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:30.951 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:31.209 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:31.209 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:31.209 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:31.467 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:31.467 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:31.467 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 lvol 150 00:17:31.724 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 00:17:31.724 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.724 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:31.982 [2024-07-14 14:50:11.218562] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:31.982 [2024-07-14 14:50:11.218696] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:31.982 true 00:17:31.982 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:31.982 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:32.239 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:32.239 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:32.804 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 00:17:32.804 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:33.062 [2024-07-14 14:50:12.366424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.319 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1868988 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1868988 /var/tmp/bdevperf.sock 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1868988 ']' 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.577 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:33.577 [2024-07-14 14:50:12.737290] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:33.577 [2024-07-14 14:50:12.737429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868988 ] 00:17:33.577 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.577 [2024-07-14 14:50:12.868071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.835 [2024-07-14 14:50:13.108341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.769 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.769 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:34.769 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:35.027 Nvme0n1 00:17:35.027 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:35.284 [ 00:17:35.284 { 00:17:35.284 "name": "Nvme0n1", 00:17:35.284 "aliases": [ 00:17:35.284 "9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28" 00:17:35.284 ], 00:17:35.284 "product_name": "NVMe disk", 00:17:35.284 "block_size": 4096, 00:17:35.284 "num_blocks": 38912, 00:17:35.284 "uuid": "9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28", 00:17:35.284 "assigned_rate_limits": { 00:17:35.284 "rw_ios_per_sec": 0, 00:17:35.284 "rw_mbytes_per_sec": 0, 00:17:35.284 "r_mbytes_per_sec": 0, 00:17:35.284 "w_mbytes_per_sec": 0 00:17:35.285 }, 00:17:35.285 "claimed": false, 00:17:35.285 "zoned": false, 00:17:35.285 "supported_io_types": { 00:17:35.285 "read": true, 00:17:35.285 "write": true, 00:17:35.285 "unmap": true, 00:17:35.285 "flush": true, 00:17:35.285 "reset": true, 00:17:35.285 "nvme_admin": true, 00:17:35.285 "nvme_io": true, 00:17:35.285 "nvme_io_md": false, 00:17:35.285 "write_zeroes": true, 00:17:35.285 "zcopy": false, 00:17:35.285 "get_zone_info": false, 00:17:35.285 "zone_management": false, 00:17:35.285 "zone_append": false, 00:17:35.285 "compare": true, 00:17:35.285 "compare_and_write": true, 00:17:35.285 "abort": true, 00:17:35.285 "seek_hole": false, 00:17:35.285 "seek_data": false, 00:17:35.285 "copy": true, 00:17:35.285 "nvme_iov_md": false 00:17:35.285 }, 00:17:35.285 "memory_domains": [ 00:17:35.285 { 00:17:35.285 "dma_device_id": "system", 00:17:35.285 "dma_device_type": 1 00:17:35.285 } 00:17:35.285 ], 00:17:35.285 "driver_specific": { 00:17:35.285 "nvme": [ 00:17:35.285 { 00:17:35.285 "trid": { 00:17:35.285 "trtype": "TCP", 00:17:35.285 "adrfam": "IPv4", 00:17:35.285 "traddr": "10.0.0.2", 00:17:35.285 "trsvcid": "4420", 00:17:35.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:35.285 }, 00:17:35.285 "ctrlr_data": { 00:17:35.285 "cntlid": 1, 00:17:35.285 "vendor_id": "0x8086", 00:17:35.285 "model_number": "SPDK bdev Controller", 00:17:35.285 "serial_number": "SPDK0", 00:17:35.285 "firmware_revision": "24.09", 00:17:35.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.285 "oacs": { 00:17:35.285 "security": 0, 00:17:35.285 "format": 0, 00:17:35.285 "firmware": 0, 00:17:35.285 "ns_manage": 0 00:17:35.285 }, 00:17:35.285 "multi_ctrlr": true, 00:17:35.285 "ana_reporting": false 00:17:35.285 }, 
00:17:35.285 "vs": { 00:17:35.285 "nvme_version": "1.3" 00:17:35.285 }, 00:17:35.285 "ns_data": { 00:17:35.285 "id": 1, 00:17:35.285 "can_share": true 00:17:35.285 } 00:17:35.285 } 00:17:35.285 ], 00:17:35.285 "mp_policy": "active_passive" 00:17:35.285 } 00:17:35.285 } 00:17:35.285 ] 00:17:35.285 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1869202 00:17:35.285 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.285 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:35.543 Running I/O for 10 seconds... 00:17:36.477 Latency(us) 00:17:36.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.477 Nvme0n1 : 1.00 11177.00 43.66 0.00 0.00 0.00 0.00 0.00 00:17:36.477 =================================================================================================================== 00:17:36.477 Total : 11177.00 43.66 0.00 0.00 0.00 0.00 0.00 00:17:36.477 00:17:37.412 14:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:37.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.412 Nvme0n1 : 2.00 11209.00 43.79 0.00 0.00 0.00 0.00 0.00 00:17:37.412 =================================================================================================================== 00:17:37.412 Total : 11209.00 43.79 0.00 0.00 0.00 0.00 0.00 00:17:37.412 00:17:37.670 true 00:17:37.670 14:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:37.670 14:50:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:37.930 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:37.930 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:37.930 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1869202 00:17:38.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.529 Nvme0n1 : 3.00 11198.00 43.74 0.00 0.00 0.00 0.00 0.00 00:17:38.529 =================================================================================================================== 00:17:38.529 Total : 11198.00 43.74 0.00 0.00 0.00 0.00 0.00 00:17:38.529 00:17:39.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.470 Nvme0n1 : 4.00 11287.75 44.09 0.00 0.00 0.00 0.00 0.00 00:17:39.470 =================================================================================================================== 00:17:39.470 Total : 11287.75 44.09 0.00 0.00 0.00 0.00 0.00 00:17:39.470 00:17:40.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.408 Nvme0n1 : 5.00 11278.20 44.06 0.00 0.00 0.00 0.00 0.00 00:17:40.408 =================================================================================================================== 00:17:40.408 
Total : 11278.20 44.06 0.00 0.00 0.00 0.00 0.00 00:17:40.408 00:17:41.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.349 Nvme0n1 : 6.00 11317.67 44.21 0.00 0.00 0.00 0.00 0.00 00:17:41.349 =================================================================================================================== 00:17:41.349 Total : 11317.67 44.21 0.00 0.00 0.00 0.00 0.00 00:17:41.349 00:17:42.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.727 Nvme0n1 : 7.00 11297.43 44.13 0.00 0.00 0.00 0.00 0.00 00:17:42.727 =================================================================================================================== 00:17:42.727 Total : 11297.43 44.13 0.00 0.00 0.00 0.00 0.00 00:17:42.727 00:17:43.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.667 Nvme0n1 : 8.00 11298.12 44.13 0.00 0.00 0.00 0.00 0.00 00:17:43.667 =================================================================================================================== 00:17:43.667 Total : 11298.12 44.13 0.00 0.00 0.00 0.00 0.00 00:17:43.667 00:17:44.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.605 Nvme0n1 : 9.00 11326.89 44.25 0.00 0.00 0.00 0.00 0.00 00:17:44.605 =================================================================================================================== 00:17:44.605 Total : 11326.89 44.25 0.00 0.00 0.00 0.00 0.00 00:17:44.605 00:17:45.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.544 Nvme0n1 : 10.00 11330.90 44.26 0.00 0.00 0.00 0.00 0.00 00:17:45.544 =================================================================================================================== 00:17:45.545 Total : 11330.90 44.26 0.00 0.00 0.00 0.00 0.00 00:17:45.545 00:17:45.545 00:17:45.545 Latency(us) 00:17:45.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.545 Nvme0n1 : 10.01 11335.23 44.28 0.00 0.00 11284.12 5291.43 22330.79 00:17:45.545 =================================================================================================================== 00:17:45.545 Total : 11335.23 44.28 0.00 0.00 11284.12 5291.43 22330.79 00:17:45.545 0 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1868988 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1868988 ']' 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1868988 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1868988 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1868988' 00:17:45.545 killing process with pid 1868988 00:17:45.545 14:50:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1868988 00:17:45.545 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.545 00:17:45.545 Latency(us) 00:17:45.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.545 =================================================================================================================== 00:17:45.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.545 14:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1868988 00:17:46.482 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:46.740 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:46.998 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:46.998 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:47.255 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:47.255 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:47.255 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.513 [2024-07-14 14:50:26.750042] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.513 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:47.771 request: 00:17:47.771 { 00:17:47.771 "uuid": "da8fe4a6-d37e-41ae-bfea-f784cbc9ee68", 00:17:47.771 "method": "bdev_lvol_get_lvstores", 00:17:47.771 "req_id": 1 00:17:47.771 } 00:17:47.771 Got JSON-RPC error response 00:17:47.771 response: 00:17:47.771 { 00:17:47.771 "code": -19, 00:17:47.771 "message": "No such device" 00:17:47.771 } 00:17:47.771 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:47.771 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.771 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.771 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.771 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:48.031 aio_bdev 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:48.031 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:48.291 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 -t 2000 00:17:48.550 [ 00:17:48.550 { 00:17:48.550 "name": "9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28", 00:17:48.550 "aliases": [ 00:17:48.550 "lvs/lvol" 00:17:48.550 ], 00:17:48.550 "product_name": "Logical Volume", 00:17:48.550 "block_size": 4096, 00:17:48.550 "num_blocks": 38912, 00:17:48.550 "uuid": "9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28", 00:17:48.550 "assigned_rate_limits": { 00:17:48.550 "rw_ios_per_sec": 0, 00:17:48.550 "rw_mbytes_per_sec": 0, 00:17:48.550 "r_mbytes_per_sec": 0, 00:17:48.550 "w_mbytes_per_sec": 0 00:17:48.550 }, 00:17:48.550 "claimed": false, 00:17:48.550 "zoned": false, 00:17:48.550 "supported_io_types": { 00:17:48.550 "read": true, 00:17:48.550 "write": true, 00:17:48.550 "unmap": true, 00:17:48.550 "flush": false, 00:17:48.550 "reset": true, 00:17:48.550 "nvme_admin": false, 00:17:48.550 "nvme_io": false, 00:17:48.550 
"nvme_io_md": false, 00:17:48.550 "write_zeroes": true, 00:17:48.550 "zcopy": false, 00:17:48.550 "get_zone_info": false, 00:17:48.550 "zone_management": false, 00:17:48.550 "zone_append": false, 00:17:48.550 "compare": false, 00:17:48.550 "compare_and_write": false, 00:17:48.550 "abort": false, 00:17:48.550 "seek_hole": true, 00:17:48.550 "seek_data": true, 00:17:48.550 "copy": false, 00:17:48.550 "nvme_iov_md": false 00:17:48.550 }, 00:17:48.550 "driver_specific": { 00:17:48.550 "lvol": { 00:17:48.550 "lvol_store_uuid": "da8fe4a6-d37e-41ae-bfea-f784cbc9ee68", 00:17:48.550 "base_bdev": "aio_bdev", 00:17:48.550 "thin_provision": false, 00:17:48.550 "num_allocated_clusters": 38, 00:17:48.550 "snapshot": false, 00:17:48.550 "clone": false, 00:17:48.550 "esnap_clone": false 00:17:48.550 } 00:17:48.550 } 00:17:48.550 } 00:17:48.550 ] 00:17:48.550 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:48.550 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:48.550 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:49.117 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:49.117 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:49.117 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:49.117 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:49.117 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9cb2dd5b-0f21-40f3-8b20-4f1f2608ec28 00:17:49.376 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da8fe4a6-d37e-41ae-bfea-f784cbc9ee68 00:17:49.635 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:49.893 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:50.152 00:17:50.152 real 0m19.299s 00:17:50.152 user 0m18.979s 00:17:50.152 sys 0m1.979s 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.152 ************************************ 00:17:50.152 END TEST lvs_grow_clean 00:17:50.152 ************************************ 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:50.152 ************************************ 00:17:50.152 START TEST lvs_grow_dirty 00:17:50.152 ************************************ 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:50.152 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:50.410 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:50.410 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:50.668 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f8f68029-78bb-427e-9fb3-ec48fb89438d 00:17:50.668 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:17:50.668 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:50.928 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:50.928 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:50.928 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f8f68029-78bb-427e-9fb3-ec48fb89438d lvol 150 00:17:51.187 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=831801cd-481a-424a-bf7a-0850084a24c2 00:17:51.187 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.187 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:51.447 
[2024-07-14 14:50:30.603766] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:51.447 [2024-07-14 14:50:30.603903] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:51.447 true 00:17:51.447 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:17:51.447 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:51.707 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:51.707 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:51.967 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 831801cd-481a-424a-bf7a-0850084a24c2 00:17:52.225 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:52.483 [2024-07-14 14:50:31.659275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.483 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1871295 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1871295 /var/tmp/bdevperf.sock 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1871295 ']' 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.741 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:52.741 [2024-07-14 14:50:32.004040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:52.741 [2024-07-14 14:50:32.004196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871295 ] 00:17:52.999 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.999 [2024-07-14 14:50:32.134284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.285 [2024-07-14 14:50:32.387364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.852 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.852 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:53.852 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:54.112 Nvme0n1 00:17:54.112 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:54.370 [ 00:17:54.370 { 00:17:54.370 "name": "Nvme0n1", 00:17:54.370 "aliases": [ 00:17:54.370 "831801cd-481a-424a-bf7a-0850084a24c2" 00:17:54.370 ], 00:17:54.370 "product_name": "NVMe disk", 00:17:54.370 "block_size": 4096, 00:17:54.370 "num_blocks": 38912, 00:17:54.370 "uuid": "831801cd-481a-424a-bf7a-0850084a24c2", 00:17:54.370 "assigned_rate_limits": { 00:17:54.370 "rw_ios_per_sec": 0, 00:17:54.370 "rw_mbytes_per_sec": 0, 00:17:54.370 "r_mbytes_per_sec": 0, 00:17:54.370 "w_mbytes_per_sec": 0 00:17:54.370 }, 00:17:54.370 "claimed": false, 00:17:54.370 "zoned": false, 00:17:54.370 "supported_io_types": { 00:17:54.370 "read": true, 00:17:54.370 "write": true, 00:17:54.370 "unmap": true, 00:17:54.370 "flush": true, 00:17:54.370 "reset": true, 00:17:54.370 "nvme_admin": true, 00:17:54.370 "nvme_io": true, 00:17:54.370 "nvme_io_md": false, 00:17:54.370 "write_zeroes": true, 00:17:54.370 "zcopy": false, 00:17:54.370 "get_zone_info": false, 00:17:54.370 "zone_management": false, 00:17:54.370 "zone_append": false, 00:17:54.370 "compare": true, 00:17:54.370 "compare_and_write": true, 00:17:54.370 "abort": true, 00:17:54.370 "seek_hole": false, 00:17:54.370 "seek_data": false, 00:17:54.370 "copy": true, 00:17:54.370 "nvme_iov_md": false 00:17:54.370 }, 00:17:54.370 "memory_domains": [ 00:17:54.370 { 00:17:54.370 "dma_device_id": "system", 00:17:54.370 "dma_device_type": 1 00:17:54.370 } 00:17:54.370 ], 00:17:54.370 "driver_specific": { 00:17:54.370 "nvme": [ 00:17:54.370 { 00:17:54.370 "trid": { 00:17:54.370 "trtype": "TCP", 00:17:54.370 "adrfam": "IPv4", 00:17:54.370 "traddr": "10.0.0.2", 00:17:54.370 "trsvcid": "4420", 00:17:54.370 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:54.370 }, 00:17:54.370 "ctrlr_data": { 00:17:54.371 "cntlid": 1, 00:17:54.371 "vendor_id": "0x8086", 00:17:54.371 "model_number": "SPDK bdev Controller", 00:17:54.371 "serial_number": "SPDK0", 
00:17:54.371 "firmware_revision": "24.09", 00:17:54.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.371 "oacs": { 00:17:54.371 "security": 0, 00:17:54.371 "format": 0, 00:17:54.371 "firmware": 0, 00:17:54.371 "ns_manage": 0 00:17:54.371 }, 00:17:54.371 "multi_ctrlr": true, 00:17:54.371 "ana_reporting": false 00:17:54.371 }, 00:17:54.371 "vs": { 00:17:54.371 "nvme_version": "1.3" 00:17:54.371 }, 00:17:54.371 "ns_data": { 00:17:54.371 "id": 1, 00:17:54.371 "can_share": true 00:17:54.371 } 00:17:54.371 } 00:17:54.371 ], 00:17:54.371 "mp_policy": "active_passive" 00:17:54.371 } 00:17:54.371 } 00:17:54.371 ] 00:17:54.371 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1871435 00:17:54.371 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:54.371 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.371 Running I/O for 10 seconds... 00:17:55.308 Latency(us) 00:17:55.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.308 Nvme0n1 : 1.00 10925.00 42.68 0.00 0.00 0.00 0.00 0.00 00:17:55.308 =================================================================================================================== 00:17:55.308 Total : 10925.00 42.68 0.00 0.00 0.00 0.00 0.00 00:17:55.308 00:17:56.245 14:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:17:56.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.510 Nvme0n1 : 2.00 11145.50 43.54 0.00 0.00 0.00 0.00 0.00 00:17:56.510 =================================================================================================================== 00:17:56.510 Total : 11145.50 43.54 0.00 0.00 0.00 0.00 0.00 00:17:56.510 00:17:56.510 true 00:17:56.772 14:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:17:56.772 14:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:57.030 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:57.030 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:57.030 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1871435 00:17:57.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.596 Nvme0n1 : 3.00 11114.33 43.42 0.00 0.00 0.00 0.00 0.00 00:17:57.596 =================================================================================================================== 00:17:57.596 Total : 11114.33 43.42 0.00 0.00 0.00 0.00 0.00 00:17:57.596 00:17:58.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.544 Nvme0n1 : 4.00 11129.75 43.48 0.00 0.00 0.00 0.00 0.00 00:17:58.544 =================================================================================================================== 00:17:58.544 Total : 11129.75 43.48 0.00 
0.00 0.00 0.00 0.00 00:17:58.544 00:17:59.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.476 Nvme0n1 : 5.00 11151.80 43.56 0.00 0.00 0.00 0.00 0.00 00:17:59.476 =================================================================================================================== 00:17:59.476 Total : 11151.80 43.56 0.00 0.00 0.00 0.00 0.00 00:17:59.476 00:18:00.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.411 Nvme0n1 : 6.00 11187.50 43.70 0.00 0.00 0.00 0.00 0.00 00:18:00.411 =================================================================================================================== 00:18:00.411 Total : 11187.50 43.70 0.00 0.00 0.00 0.00 0.00 00:18:00.411 00:18:01.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.347 Nvme0n1 : 7.00 11289.57 44.10 0.00 0.00 0.00 0.00 0.00 00:18:01.347 =================================================================================================================== 00:18:01.347 Total : 11289.57 44.10 0.00 0.00 0.00 0.00 0.00 00:18:01.347 00:18:02.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.723 Nvme0n1 : 8.00 11363.88 44.39 0.00 0.00 0.00 0.00 0.00 00:18:02.723 =================================================================================================================== 00:18:02.723 Total : 11363.88 44.39 0.00 0.00 0.00 0.00 0.00 00:18:02.723 00:18:03.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.660 Nvme0n1 : 9.00 11364.11 44.39 0.00 0.00 0.00 0.00 0.00 00:18:03.660 =================================================================================================================== 00:18:03.660 Total : 11364.11 44.39 0.00 0.00 0.00 0.00 0.00 00:18:03.660 00:18:04.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.595 Nvme0n1 : 10.00 11358.00 44.37 0.00 0.00 0.00 0.00 0.00 00:18:04.595 =================================================================================================================== 00:18:04.595 Total : 11358.00 44.37 0.00 0.00 0.00 0.00 0.00 00:18:04.595 00:18:04.595 00:18:04.595 Latency(us) 00:18:04.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.595 Nvme0n1 : 10.01 11364.09 44.39 0.00 0.00 11256.30 3689.43 22622.06 00:18:04.595 =================================================================================================================== 00:18:04.595 Total : 11364.09 44.39 0.00 0.00 11256.30 3689.43 22622.06 00:18:04.595 0 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1871295 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1871295 ']' 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1871295 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1871295 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.595 14:50:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1871295' 00:18:04.595 killing process with pid 1871295 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1871295 00:18:04.595 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.595 00:18:04.595 Latency(us) 00:18:04.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.595 =================================================================================================================== 00:18:04.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.595 14:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1871295 00:18:05.527 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:05.785 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:06.042 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:06.042 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1868422 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1868422 00:18:06.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1868422 Killed "${NVMF_APP[@]}" "$@" 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1872887 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1872887 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1872887 ']' 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.301 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.559 [2024-07-14 14:50:45.621297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:06.559 [2024-07-14 14:50:45.621437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.559 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.559 [2024-07-14 14:50:45.762359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.819 [2024-07-14 14:50:46.012733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.819 [2024-07-14 14:50:46.012800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.819 [2024-07-14 14:50:46.012825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.819 [2024-07-14 14:50:46.012845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.819 [2024-07-14 14:50:46.012898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.819 [2024-07-14 14:50:46.012950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.386 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.645 [2024-07-14 14:50:46.843663] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:07.645 [2024-07-14 14:50:46.843912] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:07.645 [2024-07-14 14:50:46.843999] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:07.645 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:07.645 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 831801cd-481a-424a-bf7a-0850084a24c2 00:18:07.645 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=831801cd-481a-424a-bf7a-0850084a24c2 00:18:07.645 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.645 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:07.646 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.646 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.646 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:07.906 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 831801cd-481a-424a-bf7a-0850084a24c2 -t 2000 00:18:08.164 [ 00:18:08.164 { 00:18:08.164 "name": "831801cd-481a-424a-bf7a-0850084a24c2", 00:18:08.164 "aliases": [ 00:18:08.164 "lvs/lvol" 00:18:08.164 ], 00:18:08.164 "product_name": "Logical Volume", 00:18:08.164 "block_size": 4096, 00:18:08.164 "num_blocks": 38912, 00:18:08.164 "uuid": "831801cd-481a-424a-bf7a-0850084a24c2", 00:18:08.164 "assigned_rate_limits": { 00:18:08.164 "rw_ios_per_sec": 0, 00:18:08.164 "rw_mbytes_per_sec": 0, 00:18:08.164 "r_mbytes_per_sec": 0, 00:18:08.164 "w_mbytes_per_sec": 0 00:18:08.164 }, 00:18:08.164 "claimed": false, 00:18:08.164 "zoned": false, 00:18:08.164 "supported_io_types": { 00:18:08.164 "read": true, 00:18:08.164 "write": true, 00:18:08.164 "unmap": true, 00:18:08.164 "flush": false, 00:18:08.164 "reset": true, 00:18:08.164 "nvme_admin": false, 00:18:08.164 "nvme_io": false, 00:18:08.164 "nvme_io_md": 
false, 00:18:08.164 "write_zeroes": true, 00:18:08.164 "zcopy": false, 00:18:08.164 "get_zone_info": false, 00:18:08.164 "zone_management": false, 00:18:08.164 "zone_append": false, 00:18:08.164 "compare": false, 00:18:08.164 "compare_and_write": false, 00:18:08.164 "abort": false, 00:18:08.164 "seek_hole": true, 00:18:08.164 "seek_data": true, 00:18:08.164 "copy": false, 00:18:08.164 "nvme_iov_md": false 00:18:08.164 }, 00:18:08.164 "driver_specific": { 00:18:08.164 "lvol": { 00:18:08.164 "lvol_store_uuid": "f8f68029-78bb-427e-9fb3-ec48fb89438d", 00:18:08.164 "base_bdev": "aio_bdev", 00:18:08.165 "thin_provision": false, 00:18:08.165 "num_allocated_clusters": 38, 00:18:08.165 "snapshot": false, 00:18:08.165 "clone": false, 00:18:08.165 "esnap_clone": false 00:18:08.165 } 00:18:08.165 } 00:18:08.165 } 00:18:08.165 ] 00:18:08.165 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:08.165 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:08.165 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:08.423 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:08.423 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:08.423 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:08.681 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:08.681 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:08.940 [2024-07-14 14:50:48.092200] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:08.940 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:09.198 request: 00:18:09.198 { 00:18:09.198 "uuid": "f8f68029-78bb-427e-9fb3-ec48fb89438d", 00:18:09.198 "method": "bdev_lvol_get_lvstores", 00:18:09.198 "req_id": 1 00:18:09.198 } 00:18:09.198 Got JSON-RPC error response 00:18:09.198 response: 00:18:09.198 { 00:18:09.198 "code": -19, 00:18:09.198 "message": "No such device" 00:18:09.198 } 00:18:09.198 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:09.198 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.198 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.198 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.198 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:09.457 aio_bdev 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 831801cd-481a-424a-bf7a-0850084a24c2 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=831801cd-481a-424a-bf7a-0850084a24c2 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:09.457 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.715 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 831801cd-481a-424a-bf7a-0850084a24c2 -t 2000 00:18:09.973 [ 00:18:09.973 { 00:18:09.973 "name": "831801cd-481a-424a-bf7a-0850084a24c2", 00:18:09.973 "aliases": [ 00:18:09.973 "lvs/lvol" 00:18:09.973 ], 00:18:09.973 "product_name": "Logical Volume", 00:18:09.973 "block_size": 4096, 00:18:09.973 "num_blocks": 38912, 00:18:09.974 "uuid": "831801cd-481a-424a-bf7a-0850084a24c2", 00:18:09.974 "assigned_rate_limits": { 00:18:09.974 "rw_ios_per_sec": 0, 00:18:09.974 "rw_mbytes_per_sec": 0, 00:18:09.974 "r_mbytes_per_sec": 0, 00:18:09.974 "w_mbytes_per_sec": 0 00:18:09.974 }, 00:18:09.974 "claimed": false, 00:18:09.974 "zoned": false, 00:18:09.974 "supported_io_types": { 
00:18:09.974 "read": true, 00:18:09.974 "write": true, 00:18:09.974 "unmap": true, 00:18:09.974 "flush": false, 00:18:09.974 "reset": true, 00:18:09.974 "nvme_admin": false, 00:18:09.974 "nvme_io": false, 00:18:09.974 "nvme_io_md": false, 00:18:09.974 "write_zeroes": true, 00:18:09.974 "zcopy": false, 00:18:09.974 "get_zone_info": false, 00:18:09.974 "zone_management": false, 00:18:09.974 "zone_append": false, 00:18:09.974 "compare": false, 00:18:09.974 "compare_and_write": false, 00:18:09.974 "abort": false, 00:18:09.974 "seek_hole": true, 00:18:09.974 "seek_data": true, 00:18:09.974 "copy": false, 00:18:09.974 "nvme_iov_md": false 00:18:09.974 }, 00:18:09.974 "driver_specific": { 00:18:09.974 "lvol": { 00:18:09.974 "lvol_store_uuid": "f8f68029-78bb-427e-9fb3-ec48fb89438d", 00:18:09.974 "base_bdev": "aio_bdev", 00:18:09.974 "thin_provision": false, 00:18:09.974 "num_allocated_clusters": 38, 00:18:09.974 "snapshot": false, 00:18:09.974 "clone": false, 00:18:09.974 "esnap_clone": false 00:18:09.974 } 00:18:09.974 } 00:18:09.974 } 00:18:09.974 ] 00:18:09.974 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:09.974 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:09.974 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:10.231 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:10.232 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:10.232 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:10.490 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:10.490 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 831801cd-481a-424a-bf7a-0850084a24c2 00:18:10.750 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8f68029-78bb-427e-9fb3-ec48fb89438d 00:18:11.011 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.270 00:18:11.270 real 0m21.280s 00:18:11.270 user 0m54.393s 00:18:11.270 sys 0m4.501s 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:11.270 ************************************ 00:18:11.270 END TEST lvs_grow_dirty 00:18:11.270 ************************************ 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:11.270 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:11.271 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:11.271 nvmf_trace.0 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.528 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.529 rmmod nvme_tcp 00:18:11.529 rmmod nvme_fabrics 00:18:11.529 rmmod nvme_keyring 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1872887 ']' 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1872887 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1872887 ']' 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1872887 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1872887 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1872887' 00:18:11.529 killing process with pid 1872887 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1872887 00:18:11.529 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1872887 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.912 
14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.912 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.817 14:50:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.817 00:18:14.817 real 0m47.712s 00:18:14.817 user 1m20.953s 00:18:14.817 sys 0m8.583s 00:18:14.817 14:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.817 14:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:14.817 ************************************ 00:18:14.817 END TEST nvmf_lvs_grow 00:18:14.817 ************************************ 00:18:14.817 14:50:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.817 14:50:53 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:14.817 14:50:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.817 14:50:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.817 14:50:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.817 ************************************ 00:18:14.817 START TEST nvmf_bdev_io_wait 00:18:14.817 ************************************ 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:14.817 * Looking for test storage... 
00:18:14.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.817 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.818 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.758 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.759 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:17.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:17.017 00:18:17.017 --- 10.0.0.2 ping statistics --- 00:18:17.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.017 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:18:17.017 00:18:17.017 --- 10.0.0.1 ping statistics --- 00:18:17.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.017 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1875549 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1875549 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1875549 ']' 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.017 14:50:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.017 [2024-07-14 14:50:56.250733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:17.017 [2024-07-14 14:50:56.250892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.276 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.276 [2024-07-14 14:50:56.390085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.535 [2024-07-14 14:50:56.655520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.535 [2024-07-14 14:50:56.655592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.535 [2024-07-14 14:50:56.655620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.535 [2024-07-14 14:50:56.655640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.535 [2024-07-14 14:50:56.655663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.535 [2024-07-14 14:50:56.655789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.535 [2024-07-14 14:50:56.655865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.535 [2024-07-14 14:50:56.655964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.535 [2024-07-14 14:50:56.655970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.101 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.102 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 [2024-07-14 14:50:57.435871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 Malloc0 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.360 [2024-07-14 14:50:57.545196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1875706 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1875708 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.360 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.360 { 00:18:18.360 "params": { 00:18:18.360 "name": "Nvme$subsystem", 00:18:18.360 "trtype": "$TEST_TRANSPORT", 00:18:18.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.360 "adrfam": "ipv4", 00:18:18.360 "trsvcid": "$NVMF_PORT", 00:18:18.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.360 "hdgst": ${hdgst:-false}, 00:18:18.360 "ddgst": ${ddgst:-false} 00:18:18.360 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 } 00:18:18.361 EOF 00:18:18.361 )") 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1875710 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.361 { 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme$subsystem", 00:18:18.361 "trtype": "$TEST_TRANSPORT", 00:18:18.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "$NVMF_PORT", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.361 "hdgst": ${hdgst:-false}, 00:18:18.361 "ddgst": ${ddgst:-false} 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 } 00:18:18.361 EOF 00:18:18.361 )") 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1875713 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.361 { 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme$subsystem", 00:18:18.361 "trtype": "$TEST_TRANSPORT", 00:18:18.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "$NVMF_PORT", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.361 "hdgst": ${hdgst:-false}, 00:18:18.361 "ddgst": ${ddgst:-false} 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 } 00:18:18.361 EOF 00:18:18.361 )") 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.361 14:50:57 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.361 { 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme$subsystem", 00:18:18.361 "trtype": "$TEST_TRANSPORT", 00:18:18.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "$NVMF_PORT", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.361 "hdgst": ${hdgst:-false}, 00:18:18.361 "ddgst": ${ddgst:-false} 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 } 00:18:18.361 EOF 00:18:18.361 )") 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1875706 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme1", 00:18:18.361 "trtype": "tcp", 00:18:18.361 "traddr": "10.0.0.2", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "4420", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.361 "hdgst": false, 00:18:18.361 "ddgst": false 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 }' 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme1", 00:18:18.361 "trtype": "tcp", 00:18:18.361 "traddr": "10.0.0.2", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "4420", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.361 "hdgst": false, 00:18:18.361 "ddgst": false 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 }' 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme1", 00:18:18.361 "trtype": "tcp", 00:18:18.361 "traddr": "10.0.0.2", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "4420", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.361 "hdgst": false, 00:18:18.361 "ddgst": false 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 }' 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.361 14:50:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.361 "params": { 00:18:18.361 "name": "Nvme1", 00:18:18.361 "trtype": "tcp", 00:18:18.361 "traddr": "10.0.0.2", 00:18:18.361 "adrfam": "ipv4", 00:18:18.361 "trsvcid": "4420", 00:18:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.361 "hdgst": false, 00:18:18.361 "ddgst": false 00:18:18.361 }, 00:18:18.361 "method": "bdev_nvme_attach_controller" 00:18:18.361 }' 00:18:18.361 [2024-07-14 14:50:57.628608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.361 [2024-07-14 14:50:57.628608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.361 [2024-07-14 14:50:57.628767] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 14:50:57.628769] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:18.361 --proc-type=auto ] 00:18:18.361 [2024-07-14 14:50:57.630739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.361 [2024-07-14 14:50:57.630739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:18.361 [2024-07-14 14:50:57.630903] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:18.361 [2024-07-14 14:50:57.630915] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:18.620 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.620 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.620 [2024-07-14 14:50:57.877256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.620 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.897 [2024-07-14 14:50:57.979802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.897 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.897 [2024-07-14 14:50:58.054515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.897 [2024-07-14 14:50:58.103967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:18.897 [2024-07-14 14:50:58.131894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.897 [2024-07-14 14:50:58.205489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:19.155 [2024-07-14 14:50:58.269438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:19.155 [2024-07-14 14:50:58.346473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:19.415 Running I/O for 1 seconds... 00:18:19.415 Running I/O for 1 seconds... 00:18:19.673 Running I/O for 1 seconds... 00:18:19.673 Running I/O for 1 seconds... 
00:18:20.610 00:18:20.610 Latency(us) 00:18:20.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.611 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:20.611 Nvme1n1 : 1.01 7366.15 28.77 0.00 0.00 17280.91 4077.80 22233.69 00:18:20.611 =================================================================================================================== 00:18:20.611 Total : 7366.15 28.77 0.00 0.00 17280.91 4077.80 22233.69 00:18:20.611 00:18:20.611 Latency(us) 00:18:20.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.611 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:20.611 Nvme1n1 : 1.01 7320.37 28.60 0.00 0.00 17391.67 4805.97 25826.04 00:18:20.611 =================================================================================================================== 00:18:20.611 Total : 7320.37 28.60 0.00 0.00 17391.67 4805.97 25826.04 00:18:20.611 00:18:20.611 Latency(us) 00:18:20.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.611 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:20.611 Nvme1n1 : 1.01 6485.05 25.33 0.00 0.00 19617.68 11359.57 35146.71 00:18:20.611 =================================================================================================================== 00:18:20.611 Total : 6485.05 25.33 0.00 0.00 19617.68 11359.57 35146.71 00:18:20.611 00:18:20.611 Latency(us) 00:18:20.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.611 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:20.611 Nvme1n1 : 1.00 137536.73 537.25 0.00 0.00 927.33 358.02 1231.83 00:18:20.611 =================================================================================================================== 00:18:20.611 Total : 137536.73 537.25 0.00 0.00 927.33 358.02 1231.83 00:18:21.547 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1875708 00:18:21.547 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1875710 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1875713 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.806 rmmod nvme_tcp 00:18:21.806 rmmod nvme_fabrics 00:18:21.806 rmmod nvme_keyring 00:18:21.806 14:51:00 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1875549 ']' 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1875549 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1875549 ']' 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1875549 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1875549 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1875549' 00:18:21.806 killing process with pid 1875549 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1875549 00:18:21.806 14:51:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1875549 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.182 14:51:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.082 14:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.082 00:18:25.082 real 0m10.227s 00:18:25.082 user 0m30.902s 00:18:25.082 sys 0m4.222s 00:18:25.082 14:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.082 14:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:25.082 ************************************ 00:18:25.082 END TEST nvmf_bdev_io_wait 00:18:25.082 ************************************ 00:18:25.082 14:51:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.082 14:51:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:25.082 14:51:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.082 14:51:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.082 14:51:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.082 ************************************ 00:18:25.082 START TEST nvmf_queue_depth 00:18:25.082 
************************************ 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:25.082 * Looking for test storage... 00:18:25.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.082 14:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.620 
14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.620 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:27.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:18:27.621 00:18:27.621 --- 10.0.0.2 ping statistics --- 00:18:27.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.621 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:27.621 00:18:27.621 --- 10.0.0.1 ping statistics --- 00:18:27.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.621 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1878304 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1878304 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1878304 ']' 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.621 14:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.621 [2024-07-14 14:51:06.616873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:27.621 [2024-07-14 14:51:06.617046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.621 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.621 [2024-07-14 14:51:06.761851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.879 [2024-07-14 14:51:07.020119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.879 [2024-07-14 14:51:07.020199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.880 [2024-07-14 14:51:07.020223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.880 [2024-07-14 14:51:07.020245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.880 [2024-07-14 14:51:07.020262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.880 [2024-07-14 14:51:07.020304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.446 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.446 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:28.446 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.446 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.446 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 [2024-07-14 14:51:07.554342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 Malloc0 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.447 
14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 [2024-07-14 14:51:07.682001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1878690 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1878690 /var/tmp/bdevperf.sock 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1878690 ']' 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.447 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.707 [2024-07-14 14:51:07.764034] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
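The I/O side is the bdevperf example application, started idle with -z on its own RPC socket and configured for queue depth 1024, 4 KiB verify I/O and a 10 second run; the NVMe-oF controller is then attached over that socket and the workload is kicked off through bdevperf.py perform_tests, as traced above and below. Condensed, the pattern is:

  # Start bdevperf idle (-z) on a private RPC socket; qd=1024, 4 KiB verify I/O, 10 s.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # Attach the target subsystem over NVMe/TCP; the resulting bdev shows up as NVMe0n1.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Tell the waiting bdevperf instance to run the configured workload.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests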
00:18:28.707 [2024-07-14 14:51:07.764194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878690 ] 00:18:28.707 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.707 [2024-07-14 14:51:07.902257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.967 [2024-07-14 14:51:08.157654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.535 14:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.535 14:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:29.535 14:51:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:29.535 14:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.535 14:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.793 NVMe0n1 00:18:29.793 14:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.793 14:51:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.793 Running I/O for 10 seconds... 00:18:42.008 00:18:42.008 Latency(us) 00:18:42.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.008 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:42.008 Verification LBA range: start 0x0 length 0x4000 00:18:42.008 NVMe0n1 : 10.09 6208.03 24.25 0.00 0.00 163977.74 12524.66 98643.82 00:18:42.008 =================================================================================================================== 00:18:42.008 Total : 6208.03 24.25 0.00 0.00 163977.74 12524.66 98643.82 00:18:42.008 0 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1878690 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1878690 ']' 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1878690 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878690 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878690' 00:18:42.008 killing process with pid 1878690 00:18:42.008 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1878690 00:18:42.009 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.009 00:18:42.009 Latency(us) 00:18:42.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.009 
=================================================================================================================== 00:18:42.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.009 14:51:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1878690 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.009 rmmod nvme_tcp 00:18:42.009 rmmod nvme_fabrics 00:18:42.009 rmmod nvme_keyring 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1878304 ']' 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1878304 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1878304 ']' 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1878304 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878304 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878304' 00:18:42.009 killing process with pid 1878304 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1878304 00:18:42.009 14:51:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1878304 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.575 14:51:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.504 14:51:23 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.504 00:18:44.504 real 0m19.424s 00:18:44.504 user 0m27.698s 00:18:44.504 sys 0m3.240s 00:18:44.504 14:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.504 14:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:44.504 ************************************ 00:18:44.504 END TEST nvmf_queue_depth 00:18:44.504 ************************************ 00:18:44.504 14:51:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.504 14:51:23 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:44.504 14:51:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.504 14:51:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.504 14:51:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.504 ************************************ 00:18:44.504 START TEST nvmf_target_multipath 00:18:44.504 ************************************ 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:44.504 * Looking for test storage... 00:18:44.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.504 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:44.762 14:51:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.664 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:46.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:46.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:46.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:46.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:18:46.665 00:18:46.665 --- 10.0.0.2 ping statistics --- 00:18:46.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.665 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:18:46.665 00:18:46.665 --- 10.0.0.1 ping statistics --- 00:18:46.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.665 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.665 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:46.666 only one NIC for nvmf test 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.666 rmmod nvme_tcp 00:18:46.666 rmmod nvme_fabrics 00:18:46.666 rmmod nvme_keyring 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.666 14:51:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.205 00:18:49.205 real 0m4.239s 00:18:49.205 user 0m0.781s 00:18:49.205 sys 0m1.434s 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.205 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.205 ************************************ 00:18:49.205 END TEST nvmf_target_multipath 00:18:49.205 ************************************ 00:18:49.205 14:51:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:49.205 14:51:28 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.205 14:51:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.205 14:51:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.205 14:51:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.205 ************************************ 00:18:49.205 START TEST nvmf_zcopy 00:18:49.205 ************************************ 00:18:49.205 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.205 * Looking for test storage... 
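Before the zcopy test gets going, a note on the multipath run that just finished: common.sh found only one usable target link, so NVMF_SECOND_TARGET_IP was left empty and multipath.sh printed 'only one NIC for nvmf test', tore the environment down and exited 0 without doing any multipath I/O (hence the roughly four second runtime). Reconstructed from the multipath.sh@45-48 trace lines, and assuming the empty variable being tested is NVMF_SECOND_TARGET_IP, the guard is roughly:

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini   # unload nvme-tcp/nvme-fabrics/nvme-keyring, remove the netns, flush addresses
      exit 0
  fi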
00:18:49.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.206 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.115 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.115 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.115 14:51:29 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.115 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:51.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.116 
14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:51.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:51.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:51.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.116 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:51.116 00:18:51.116 --- 10.0.0.2 ping statistics --- 00:18:51.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.116 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:18:51.116 00:18:51.116 --- 10.0.0.1 ping statistics --- 00:18:51.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.116 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1884385 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1884385 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1884385 ']' 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.116 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.117 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.117 [2024-07-14 14:51:30.190688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:51.117 [2024-07-14 14:51:30.190841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.117 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.117 [2024-07-14 14:51:30.328767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.376 [2024-07-14 14:51:30.588312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.376 [2024-07-14 14:51:30.588376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:51.376 [2024-07-14 14:51:30.588413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.376 [2024-07-14 14:51:30.588439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.376 [2024-07-14 14:51:30.588461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.376 [2024-07-14 14:51:30.588511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 [2024-07-14 14:51:31.180542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 [2024-07-14 14:51:31.196814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.944 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.205 malloc0 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.205 
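For the zcopy test the TCP transport is created with in-capsule data disabled and zero-copy enabled (-c 0 --zcopy), the subsystem is capped at 10 namespaces (-m 10), and both the subsystem listener and a discovery listener are opened on 10.0.0.2 port 4420 before a 32 MiB malloc bdev is created and attached as namespace 1 (next lines). A hedged sketch of the same provisioning via scripts/rpc.py:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # no in-capsule data, zero-copy path enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MiB bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1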
14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:52.205 { 00:18:52.205 "params": { 00:18:52.205 "name": "Nvme$subsystem", 00:18:52.205 "trtype": "$TEST_TRANSPORT", 00:18:52.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.205 "adrfam": "ipv4", 00:18:52.205 "trsvcid": "$NVMF_PORT", 00:18:52.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.205 "hdgst": ${hdgst:-false}, 00:18:52.205 "ddgst": ${ddgst:-false} 00:18:52.205 }, 00:18:52.205 "method": "bdev_nvme_attach_controller" 00:18:52.205 } 00:18:52.205 EOF 00:18:52.205 )") 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:52.205 14:51:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:52.205 "params": { 00:18:52.205 "name": "Nvme1", 00:18:52.205 "trtype": "tcp", 00:18:52.205 "traddr": "10.0.0.2", 00:18:52.205 "adrfam": "ipv4", 00:18:52.205 "trsvcid": "4420", 00:18:52.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.205 "hdgst": false, 00:18:52.205 "ddgst": false 00:18:52.205 }, 00:18:52.205 "method": "bdev_nvme_attach_controller" 00:18:52.205 }' 00:18:52.205 [2024-07-14 14:51:31.356584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.205 [2024-07-14 14:51:31.356720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884541 ] 00:18:52.205 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.205 [2024-07-14 14:51:31.486166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.464 [2024-07-14 14:51:31.741133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.035 Running I/O for 10 seconds... 
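(For readability, the xtrace above reduces to the short target/zcopy.sh sequence below. Every command is copied from the trace; only the comments are added, and rpc_cmd is the suite's RPC helper.)

# Target side: NVMe-oF TCP target with zero-copy, one malloc-backed namespace
nvmfappstart -m 0x2                                              # nvmf_tgt on core 1, inside netns cvl_0_0_ns_spdk
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport with zero-copy enabled
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB malloc bdev, 4096-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Host side: 10 s verify workload, queue depth 128, 8 KiB I/O, config read from /dev/fd/62
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192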
00:19:03.019 00:19:03.019 Latency(us) 00:19:03.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:03.019 Verification LBA range: start 0x0 length 0x1000 00:19:03.019 Nvme1n1 : 10.02 4266.92 33.34 0.00 0.00 29913.67 4417.61 40195.41 00:19:03.019 =================================================================================================================== 00:19:03.019 Total : 4266.92 33.34 0.00 0.00 29913.67 4417.61 40195.41 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1885955 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.957 { 00:19:03.957 "params": { 00:19:03.957 "name": "Nvme$subsystem", 00:19:03.957 "trtype": "$TEST_TRANSPORT", 00:19:03.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.957 "adrfam": "ipv4", 00:19:03.957 "trsvcid": "$NVMF_PORT", 00:19:03.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.957 "hdgst": ${hdgst:-false}, 00:19:03.957 "ddgst": ${ddgst:-false} 00:19:03.957 }, 00:19:03.957 "method": "bdev_nvme_attach_controller" 00:19:03.957 } 00:19:03.957 EOF 00:19:03.957 )") 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:03.957 [2024-07-14 14:51:43.235485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.957 [2024-07-14 14:51:43.235558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:03.957 14:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:03.957 "params": { 00:19:03.957 "name": "Nvme1", 00:19:03.957 "trtype": "tcp", 00:19:03.957 "traddr": "10.0.0.2", 00:19:03.957 "adrfam": "ipv4", 00:19:03.957 "trsvcid": "4420", 00:19:03.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.957 "hdgst": false, 00:19:03.957 "ddgst": false 00:19:03.957 }, 00:19:03.957 "method": "bdev_nvme_attach_controller" 00:19:03.957 }' 00:19:03.957 [2024-07-14 14:51:43.243389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.957 [2024-07-14 14:51:43.243424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.957 [2024-07-14 14:51:43.251419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.957 [2024-07-14 14:51:43.251447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.957 [2024-07-14 14:51:43.259446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.957 [2024-07-14 14:51:43.259475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.267513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.267561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.275493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.275523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.283500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.283526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.291505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.291532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.299567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.299593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.307556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.307583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.309411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:04.214 [2024-07-14 14:51:43.309534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885955 ] 00:19:04.214 [2024-07-14 14:51:43.315620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.315647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.323618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.323644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.331633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.331661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.339676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.339705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.347692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.347720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.355695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.355721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.363734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.363760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.371737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.371763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.379773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.379799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.214 [2024-07-14 14:51:43.387797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.387824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.395858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.395903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.403891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.403937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.411904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.411955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 
14:51:43.419933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.419961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.427963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.427992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.435976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.436012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.444015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.444042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.450302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.214 [2024-07-14 14:51:43.452032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.452061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.460066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.460104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.468132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.468193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.476087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.476115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.484088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.484114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.492139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.492165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.500139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.500185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.508201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.508247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.214 [2024-07-14 14:51:43.516213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.214 [2024-07-14 14:51:43.516260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.524238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.524272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.532295] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.532328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.540308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.540341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.548311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.548343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.556348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.556381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.564353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.564385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.572392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.572424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.580420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.580452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.588459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.588491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.596494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.596535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.608582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.608637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.616512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.616544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.624553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.624586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.632556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.632588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.640596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.473 [2024-07-14 14:51:43.640628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.473 [2024-07-14 14:51:43.648619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.648651] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.656617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.656649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.664665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.664697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.672684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.672716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.680687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.680718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.688753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.688786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.696748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.696780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.704773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.704804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.712795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.712827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.720802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.720833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.727822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.474 [2024-07-14 14:51:43.728851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.728890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.736867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.736930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.744943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.744983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.753004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.753051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.760962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.760992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.768976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.769004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.474 [2024-07-14 14:51:43.776995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.474 [2024-07-14 14:51:43.777021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.785017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.785045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.793026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.793053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.801046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.801072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.809061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.809087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.817105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.817132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.825202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.825252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.833232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.833307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.841261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.841312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.853321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.853386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.861256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.861289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.869276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.869308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.877282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.877314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.885352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:19:04.734 [2024-07-14 14:51:43.885383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.893338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.893371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.901367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.901399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.909398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.909430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.917398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.917430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.925441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.925473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.933455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.933499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.941466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.941498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.949507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.949539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.957506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.957537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.965547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.965578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.973567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.973598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.981601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.981633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.989696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.989741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:43.997716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:43.997767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 
14:51:44.005714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:44.005765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:44.013727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:44.013761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:44.021696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:44.021729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:44.029740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:44.029773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.734 [2024-07-14 14:51:44.037767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.734 [2024-07-14 14:51:44.037801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.045771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.045804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.053808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.053841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.061835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.061868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.069842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.069874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.077932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.077960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.085893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.085927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.093954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.093982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.101972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.102000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.109990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.110020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.118025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.118052] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.126060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.126091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.134128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.134160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.142096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.142126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.150068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.150098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.158195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.158246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.166133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.166183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.174157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.174187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.182204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.182238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.190471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.190508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.198477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.198511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 Running I/O for 5 seconds... 
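(The run that starts here is the 5-second random read/write pass traced at target/zcopy.sh@37 above. The surrounding 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs appear to be repeated nvmf_subsystem_add_ns RPCs for NSID 1, which is already attached to cnode1, so each attempt is rejected; they read as expected test traffic rather than failures. For reference, the invocation and the attach-controller entry generated by gen_nvmf_target_json, copied from the trace and reformatted:)

# Host side: 5 s, queue depth 128, 50/50 random read/write, 8 KiB I/O
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
# bdev_nvme_attach_controller entry printed by gen_nvmf_target_json (part of the JSON config read from /dev/fd/63):
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}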
00:19:04.993 [2024-07-14 14:51:44.211549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.211586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.226956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.226994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.244454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.244490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.261376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.261428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.278506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.278558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.993 [2024-07-14 14:51:44.295297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.993 [2024-07-14 14:51:44.295335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.311752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.311788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.327869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.327927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.341689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.341724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.358130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.358166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.372236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.372285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.389338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.389388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.406060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.406096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.423171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.423208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.439711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 
[2024-07-14 14:51:44.439746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.456657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.456692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.473023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.473059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.489099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.489135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.505609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.505644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.522324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.522358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.538992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.539032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.251 [2024-07-14 14:51:44.556109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.251 [2024-07-14 14:51:44.556146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.572707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.572742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.589138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.589202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.603308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.603342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.619643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.619679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.634184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.634234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.650927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.650964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.667360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.667394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.684313] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.510 [2024-07-14 14:51:44.684355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.510 [2024-07-14 14:51:44.701395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.701429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.717819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.717854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.733863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.733922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.747584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.747620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.764209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.764244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.779324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.779374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.795373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.795423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.511 [2024-07-14 14:51:44.811969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.511 [2024-07-14 14:51:44.812005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.828233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.828268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.844038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.844090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.860084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.860120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.875531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.875582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.891706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.891755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.906715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.906751] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.923718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.923752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.941055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.941105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.958395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.958435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.974661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.770 [2024-07-14 14:51:44.974711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.770 [2024-07-14 14:51:44.990002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:44.990045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.771 [2024-07-14 14:51:45.006112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:45.006147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.771 [2024-07-14 14:51:45.022333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:45.022368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.771 [2024-07-14 14:51:45.038734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:45.038769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.771 [2024-07-14 14:51:45.054978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:45.055014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.771 [2024-07-14 14:51:45.070978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.771 [2024-07-14 14:51:45.071012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.088133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.088168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.105284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.105325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.122804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.122843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.140321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.140361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.158148] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.158200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.175276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.175315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.193091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.193125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.210502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.210541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.227536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.227575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.245151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.245202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.262683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.262722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.279533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.279572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.296722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.296761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.314181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.314231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.030 [2024-07-14 14:51:45.331194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.030 [2024-07-14 14:51:45.331244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.289 [2024-07-14 14:51:45.349133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.289 [2024-07-14 14:51:45.349190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.366816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.366855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.385448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.385489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.402685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.402725] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.420244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.420284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.438311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.438352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.455977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.456013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.472488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.472528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.488025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.488060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.505607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.505647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.522896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.522948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.539606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.539647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.557615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.557658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.575780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.575820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.290 [2024-07-14 14:51:45.594247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.290 [2024-07-14 14:51:45.594287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.611061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.611097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.628752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.628792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.646260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.646309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.664622] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.664662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.682770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.682809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.700637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.700677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.718438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.718479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.736400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.736440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.754285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.754325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.771487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.771528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.789563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.789602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.807444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.807484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.824484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.824525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.549 [2024-07-14 14:51:45.842371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.549 [2024-07-14 14:51:45.842411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.860693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.860734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.877679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.877718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.895898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.895953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.913455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.913495] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.931495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.931545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.949715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.949770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.967803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.967842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:45.985047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:45.985080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.002161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:46.002194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.019581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:46.019621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.037933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:46.037967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.055357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:46.055397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.072425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.817 [2024-07-14 14:51:46.072465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.817 [2024-07-14 14:51:46.089623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.818 [2024-07-14 14:51:46.089662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.818 [2024-07-14 14:51:46.107723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.818 [2024-07-14 14:51:46.107763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.126044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.126078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.143178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.143230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.160766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.160804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.178496] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.178536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.196183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.196222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.214336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.214376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.231557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.231597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.248654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.248695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.266745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.266784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.284207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.284248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.302095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.302130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.319655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.319694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.336959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.336992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.355303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.355343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.076 [2024-07-14 14:51:46.373023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.076 [2024-07-14 14:51:46.373058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.391332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.391373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.408501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.408540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.425145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.425178] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.442204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.442254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.459931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.459980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.478047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.478081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.496210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.496259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.514233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.514273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.531396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.333 [2024-07-14 14:51:46.531436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.333 [2024-07-14 14:51:46.548666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.548705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.334 [2024-07-14 14:51:46.567013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.567050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.334 [2024-07-14 14:51:46.585563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.585604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.334 [2024-07-14 14:51:46.603653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.603694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.334 [2024-07-14 14:51:46.619901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.619936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.334 [2024-07-14 14:51:46.636648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.334 [2024-07-14 14:51:46.636709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.654219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.654270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.671870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.671937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.689870] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.689946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.707766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.707807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.725449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.725488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.742439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.742479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.759676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.759715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.777207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.777262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.794230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.794270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.811999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.812032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.829076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.829111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.847522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.847562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.865492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.865531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.882840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.882892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.593 [2024-07-14 14:51:46.900407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.593 [2024-07-14 14:51:46.900446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:46.919213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:46.919264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:46.936817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:46.936856] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:46.955057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:46.955110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:46.973324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:46.973364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:46.990600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:46.990640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.007900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.007950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.025605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.025645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.043427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.043468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.060541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.060581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.078004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.078038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.095038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.095073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.111818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.111858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.129289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.129328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.854 [2024-07-14 14:51:47.146454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.854 [2024-07-14 14:51:47.146495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.164777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.164818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.182640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.182679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.199828] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.199867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.217048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.217083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.233762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.233802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.251008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.251041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.270040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.270076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.287948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.287984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.305532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.305582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.322341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.322380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.340094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.340127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.357971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.358019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.375405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.375445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.392607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.392647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.114 [2024-07-14 14:51:47.409493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.114 [2024-07-14 14:51:47.409543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.428063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.428098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.445785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.445825] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.464191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.464240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.481934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.481967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.499661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.499701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.516346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.516385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.534376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.534415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.551516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.551557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.568737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.568777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.586128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.586179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.604431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.604470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.621647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.621687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.373 [2024-07-14 14:51:47.639953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.373 [2024-07-14 14:51:47.640017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.374 [2024-07-14 14:51:47.657977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.374 [2024-07-14 14:51:47.658025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.374 [2024-07-14 14:51:47.677277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.374 [2024-07-14 14:51:47.677318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.695934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.695972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.713386] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.713422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.730409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.730444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.746797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.746832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.762600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.762635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.779172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.779223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.795539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.795589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.812125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.812162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.829420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.829454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.846685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.846733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.864721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.864770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.881447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.881482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.897719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.897754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.914454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.914504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.634 [2024-07-14 14:51:47.931282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.634 [2024-07-14 14:51:47.931331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.893 [2024-07-14 14:51:47.948408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.893 [2024-07-14 14:51:47.948458] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.893 [2024-07-14 14:51:47.966007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.893 [2024-07-14 14:51:47.966057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.893 [2024-07-14 14:51:47.982258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.893 [2024-07-14 14:51:47.982308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.893 [2024-07-14 14:51:47.999387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.893 [2024-07-14 14:51:47.999422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.016217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.016267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.033553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.033589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.051568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.051608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.069199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.069252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.086622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.086662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.104100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.104136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.122543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.122583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.140426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.140466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.157914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.157965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.174992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.175026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.894 [2024-07-14 14:51:48.192548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.894 [2024-07-14 14:51:48.192588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.154 [2024-07-14 14:51:48.210101] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.154 [2024-07-14 14:51:48.210138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.154 [2024-07-14 14:51:48.227704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.154 [2024-07-14 14:51:48.227745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.245690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.245730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.262707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.262746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.280326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.280366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.297971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.298027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.315809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.315849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.333054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.333087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.350321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.350361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.368339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.368378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.385434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.385474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.402430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.402470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.419555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.419595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.437436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.437476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.155 [2024-07-14 14:51:48.455759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.155 [2024-07-14 14:51:48.455798] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.473321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.473362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.491365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.491404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.508631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.508671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.526482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.526522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.544356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.544396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.561465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.561505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.578279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.578319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.595960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.596012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.614203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.614257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.631419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.631458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.648236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.648276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.665438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.665477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.682431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.682471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.699178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.699229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.419 [2024-07-14 14:51:48.716248] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.419 [2024-07-14 14:51:48.716287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.733420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.733461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.751449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.751489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.769559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.769599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.786520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.786560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.803866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.803927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.820891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.820955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.839582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.839623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.857435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.857485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.875205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.875259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.892747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.892788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.909548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.909588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.927274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.927314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.945211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.945262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.962945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.962980] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.709 [2024-07-14 14:51:48.980141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.709 [2024-07-14 14:51:48.980191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:48.998549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:48.998589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.016252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.016291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.034206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.034239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.051564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.051604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.068970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.069018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.085570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.085610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.101667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.101706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.118829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.118868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.136950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.136996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.154611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.154651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.172184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.172218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.190193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.190233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.207564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.207604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.225450] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.225489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969 [2024-07-14 14:51:49.231762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.969 [2024-07-14 14:51:49.231800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.969
00:19:09.969                                                                                     Latency(us)
00:19:09.970 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:09.970 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:09.970      Nvme1n1                :       5.02    7316.96      57.16      0.00      0.00   17456.33    5776.88   27767.85
00:19:09.970 ===================================================================================================================
00:19:09.970      Total                  :              7316.96      57.16      0.00      0.00   17456.33    5776.88   27767.85
00:19:09.970 [2024-07-14 14:51:49.239765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.970 [2024-07-14 14:51:49.239802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.970 [2024-07-14 14:51:49.247800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.970 [2024-07-14 14:51:49.247838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.970 [2024-07-14 14:51:49.255817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.970 [2024-07-14 14:51:49.255851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.970 [2024-07-14 14:51:49.263814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.970 [2024-07-14 14:51:49.263849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.970 [2024-07-14 14:51:49.271891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.970 [2024-07-14 14:51:49.271948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.279900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.279952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.288076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.288147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.296073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.296125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.303960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.303988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.311987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.312016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.320037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.320068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
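The figures in the summary above are internally consistent: the MiB/s column is just IOPS times the 8192-byte IO size shown on the Job line, and the average latency follows from the queue depth of 128 via Little's law. A minimal stand-alone Python sketch of that arithmetic (values are copied from the table; the script is illustrative only and is not part of the SPDK test):

    # Cross-check of the bdevperf-style summary above; nothing here comes from SPDK itself.
    io_size_bytes = 8192        # "IO size: 8192" from the Job line
    queue_depth = 128           # "depth: 128" from the Job line
    iops = 7316.96              # reported IOPS
    mib_per_sec = 57.16         # reported MiB/s
    avg_latency_us = 17456.33   # reported average latency in microseconds

    # Throughput column = IOPS * IO size, expressed in MiB/s.
    throughput = iops * io_size_bytes / (1024 * 1024)
    print(round(throughput, 2))             # 57.16, matches the MiB/s column

    # Little's law: average latency ~= queue depth / IOPS.
    print(round(queue_depth / iops * 1e6))  # ~17494 us, within ~0.3% of the reported 17456.33 us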
00:19:10.230 [2024-07-14 14:51:49.328038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.328068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.336040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.336069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.344046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.344075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.352083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.352112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.360116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.360145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.368242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.368295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.376279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.376348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.384296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.230 [2024-07-14 14:51:49.384351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.230 [2024-07-14 14:51:49.392200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.392232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.400256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.400289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.408253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.408285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.416335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.416368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.424321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.424352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.432326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.432359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.440372]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.440404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.448394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.448426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.456393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.456426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.464455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.464488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.472447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.472480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.480480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.480513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.488502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.488534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.496508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.496542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.504558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.504592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.512623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.512656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.520573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.520606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.528751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.528821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.231 [2024-07-14 14:51:49.536729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.231 [2024-07-14 14:51:49.536789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.544690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.544723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.556672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.556700] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.564722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.564755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.572736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.572768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.580761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.580793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.588804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.588849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.596988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.597049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.604953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.605015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.613018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.613083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.620887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.620920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.628888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.628935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.636950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.636982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.644959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.644988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.491 [2024-07-14 14:51:49.652961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.491 [2024-07-14 14:51:49.652990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.661019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.661049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.669004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.669033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.677038] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.677067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.685055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.685095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.693097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.693126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.701088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.701118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.709118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.709147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.717114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.717143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.725178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.725211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.733182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.733215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.741223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.741255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.749256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.749291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.757283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.757316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.765325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.765359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.773370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.773418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.781422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.781481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.789412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.789452] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.492 [2024-07-14 14:51:49.797361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.492 [2024-07-14 14:51:49.797393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.805399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.805431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.813429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.813462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.821428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.821461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.829468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.829500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.837503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.837545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.845501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.845533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.853559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.853591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.861544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.861576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.869591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.869623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.877633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.877666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.885612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.885644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.893653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.893685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.901794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.901856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.909727] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.909771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.917738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.917769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.925724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.925756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.933773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.933805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.941792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.941824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.949819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.949851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.957842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.957874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.751 [2024-07-14 14:51:49.965856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.751 [2024-07-14 14:51:49.965899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:49.973861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:49.973904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:49.981934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:49.981963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:49.989935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:49.989972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:49.998072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:49.998134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.006044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.006090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.014020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.014059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.022047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.022079] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.030059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.030090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.038050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.038080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.046090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.046120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.752 [2024-07-14 14:51:50.054095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.752 [2024-07-14 14:51:50.054124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.062146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.062192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.070156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.070206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.078175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.078208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.086243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.086276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.094238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.094271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.102237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.102281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.110406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.110468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.118297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.118329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.126350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.126383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.134372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.134404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.142396] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.142428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.150385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.150414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.158528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.158589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.166449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.166482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.174481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.174514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.010 [2024-07-14 14:51:50.182482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.010 [2024-07-14 14:51:50.182514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.190525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.190557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.198546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.198578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.206563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.206595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.214597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.214629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.222620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.222652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.230619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.230651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.238684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.238716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.246665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.246697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.254704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.254737] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.262732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.262766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.270737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.270769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.278785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.278819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.286825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.286863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.294940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.294969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.302846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.302886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 [2024-07-14 14:51:50.310846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.011 [2024-07-14 14:51:50.310891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1885955) - No such process 00:19:11.011 14:51:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1885955 00:19:11.011 14:51:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.011 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.011 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.271 delay0 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.271 14:51:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:11.271 EAL: No free 2048 kB hugepages 
reported on node 1 00:19:11.271 [2024-07-14 14:51:50.479155] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:19.395 Initializing NVMe Controllers 00:19:19.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.395 Initialization complete. Launching workers. 00:19:19.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 11793 00:19:19.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11969, failed to submit 91 00:19:19.395 success 11855, unsuccess 114, failed 0 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.395 rmmod nvme_tcp 00:19:19.395 rmmod nvme_fabrics 00:19:19.395 rmmod nvme_keyring 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1884385 ']' 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1884385 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1884385 ']' 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1884385 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884385 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884385' 00:19:19.395 killing process with pid 1884385 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1884385 00:19:19.395 14:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1884385 00:19:19.963 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.963 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.963 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.963 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.963 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.964 14:51:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- 
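The tail of the zcopy test traced above removes namespace 1, wraps malloc0 in a delay bdev, re-exposes it as namespace 1, and then drives it with the abort example. A minimal standalone sketch of that sequence, assuming the test helper rpc_cmd simply forwards to scripts/rpc.py against the already-running target (NQN, bdev names, and abort arguments are the ones traced in this run):
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'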
# xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.964 14:51:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.964 14:51:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.502 14:52:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.502 00:19:22.502 real 0m33.203s 00:19:22.502 user 0m49.100s 00:19:22.502 sys 0m8.917s 00:19:22.502 14:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.502 14:52:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:22.502 ************************************ 00:19:22.502 END TEST nvmf_zcopy 00:19:22.502 ************************************ 00:19:22.502 14:52:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:22.502 14:52:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.502 14:52:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.502 14:52:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.502 14:52:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:22.502 ************************************ 00:19:22.502 START TEST nvmf_nmic 00:19:22.502 ************************************ 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.503 * Looking for test storage... 00:19:22.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.503 14:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
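The NIC discovery traced here is essentially a sysfs walk: for each matching PCI ID the script globs the device's net/ directory and keeps interfaces that are up. A hand-run equivalent for the first E810 port found in this run (paths as reported above):
ls /sys/bus/pci/devices/0000:0a:00.0/net/      # -> cvl_0_0
cat /sys/class/net/cvl_0_0/operstate           # discovery appears to keep only interfaces reporting "up"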
net_devs+=("${pci_net_devs[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:24.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:19:24.403 00:19:24.403 --- 10.0.0.2 ping statistics --- 00:19:24.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.403 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:19:24.403 00:19:24.403 --- 10.0.0.1 ping statistics --- 00:19:24.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.403 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.403 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1889634 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1889634 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1889634 ']' 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.404 14:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.404 [2024-07-14 14:52:03.474344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
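Before the target starts, nvmf_tcp_init moves one port into a network namespace and checks connectivity both ways; the pings above are that check. The same topology can be reproduced by hand with the commands traced in this run (the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing are specific to this machine):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &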
00:19:24.404 [2024-07-14 14:52:03.474491] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.404 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.404 [2024-07-14 14:52:03.613484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.662 [2024-07-14 14:52:03.880180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.662 [2024-07-14 14:52:03.880260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.662 [2024-07-14 14:52:03.880288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.662 [2024-07-14 14:52:03.880309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.662 [2024-07-14 14:52:03.880337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.662 [2024-07-14 14:52:03.880468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.662 [2024-07-14 14:52:03.880534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.662 [2024-07-14 14:52:03.880743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.662 [2024-07-14 14:52:03.880751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 [2024-07-14 14:52:04.417084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 Malloc0 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 [2024-07-14 14:52:04.522681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:25.229 test case1: single bdev can't be used in multiple subsystems 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.229 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.488 [2024-07-14 14:52:04.546466] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:25.488 [2024-07-14 14:52:04.546525] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:25.488 [2024-07-14 14:52:04.546566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.488 request: 00:19:25.488 { 00:19:25.488 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.488 "namespace": { 00:19:25.488 "bdev_name": "Malloc0", 00:19:25.488 "no_auto_visible": false 00:19:25.488 }, 00:19:25.488 "method": "nvmf_subsystem_add_ns", 00:19:25.488 "req_id": 1 00:19:25.488 } 00:19:25.488 Got JSON-RPC error response 00:19:25.488 response: 00:19:25.488 { 00:19:25.488 "code": -32602, 00:19:25.488 "message": "Invalid parameters" 00:19:25.488 } 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
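Test case 1 exercises the claim semantics shown in the error above: a bdev added to one subsystem is claimed exclusive_write, so adding it to a second subsystem must fail. A minimal sketch of the same check against a running target, assuming rpc_cmd maps onto scripts/rpc.py (names and sizes as used in this test):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed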
echo ' Adding namespace failed - expected result.' 00:19:25.488 Adding namespace failed - expected result. 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:25.488 test case2: host connect to nvmf target in multiple paths 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.488 [2024-07-14 14:52:04.554606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.488 14:52:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:26.053 14:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:26.621 14:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:26.621 14:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:26.621 14:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.621 14:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:26.621 14:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:29.152 14:52:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:29.152 [global] 00:19:29.152 thread=1 00:19:29.152 invalidate=1 00:19:29.152 rw=write 00:19:29.152 time_based=1 00:19:29.152 runtime=1 00:19:29.152 ioengine=libaio 00:19:29.152 direct=1 00:19:29.152 bs=4096 00:19:29.152 iodepth=1 00:19:29.152 norandommap=0 00:19:29.152 numjobs=1 00:19:29.152 00:19:29.152 verify_dump=1 00:19:29.152 verify_backlog=512 00:19:29.152 verify_state_save=0 00:19:29.152 do_verify=1 00:19:29.152 verify=crc32c-intel 00:19:29.152 [job0] 00:19:29.152 filename=/dev/nvme0n1 00:19:29.152 Could not set queue depth (nvme0n1) 00:19:29.153 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.153 fio-3.35 00:19:29.153 Starting 1 thread 00:19:30.093 00:19:30.093 job0: (groupid=0, jobs=1): err= 0: pid=1890279: Sun Jul 14 14:52:09 2024 00:19:30.093 read: IOPS=21, BW=87.7KiB/s 
(89.8kB/s)(88.0KiB/1003msec) 00:19:30.093 slat (nsec): min=6525, max=33024, avg=22827.64, stdev=10278.36 00:19:30.093 clat (usec): min=40822, max=41041, avg=40963.53, stdev=48.48 00:19:30.093 lat (usec): min=40828, max=41053, avg=40986.36, stdev=48.03 00:19:30.093 clat percentiles (usec): 00:19:30.093 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:30.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:30.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:30.093 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.093 | 99.99th=[41157] 00:19:30.093 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:30.093 slat (nsec): min=5427, max=34217, avg=7634.78, stdev=3474.32 00:19:30.093 clat (usec): min=153, max=389, avg=188.15, stdev=19.19 00:19:30.093 lat (usec): min=159, max=421, avg=195.78, stdev=20.43 00:19:30.093 clat percentiles (usec): 00:19:30.093 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 174], 00:19:30.093 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:19:30.093 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:19:30.093 | 99.00th=[ 227], 99.50th=[ 247], 99.90th=[ 392], 99.95th=[ 392], 00:19:30.093 | 99.99th=[ 392] 00:19:30.093 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:30.093 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.093 lat (usec) : 250=95.51%, 500=0.37% 00:19:30.093 lat (msec) : 50=4.12% 00:19:30.093 cpu : usr=0.30%, sys=0.30%, ctx=534, majf=0, minf=2 00:19:30.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.093 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.093 00:19:30.093 Run status group 0 (all jobs): 00:19:30.093 READ: bw=87.7KiB/s (89.8kB/s), 87.7KiB/s-87.7KiB/s (89.8kB/s-89.8kB/s), io=88.0KiB (90.1kB), run=1003-1003msec 00:19:30.093 WRITE: bw=2042KiB/s (2091kB/s), 2042KiB/s-2042KiB/s (2091kB/s-2091kB/s), io=2048KiB (2097kB), run=1003-1003msec 00:19:30.093 00:19:30.093 Disk stats (read/write): 00:19:30.093 nvme0n1: ios=69/512, merge=0/0, ticks=805/96, in_queue=901, util=91.68% 00:19:30.093 14:52:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.351 rmmod nvme_tcp 00:19:30.351 rmmod nvme_fabrics 00:19:30.351 rmmod nvme_keyring 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1889634 ']' 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1889634 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1889634 ']' 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1889634 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889634 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889634' 00:19:30.351 killing process with pid 1889634 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1889634 00:19:30.351 14:52:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1889634 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.255 14:52:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.159 14:52:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.159 00:19:34.159 real 0m11.845s 00:19:34.159 user 0m28.103s 00:19:34.159 sys 0m2.387s 00:19:34.159 14:52:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.159 14:52:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:34.159 ************************************ 00:19:34.159 END TEST nvmf_nmic 00:19:34.159 ************************************ 00:19:34.159 14:52:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:34.159 14:52:13 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:34.159 14:52:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:34.159 14:52:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.159 14:52:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.159 ************************************ 00:19:34.159 START TEST nvmf_fio_target 00:19:34.159 ************************************ 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:34.159 * Looking for test storage... 00:19:34.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.159 14:52:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.160 14:52:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.097 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.098 14:52:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.098 14:52:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.098 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.098 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:19:36.098 00:19:36.098 --- 10.0.0.2 ping statistics --- 00:19:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.098 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:19:36.098 00:19:36.098 --- 10.0.0.1 ping statistics --- 00:19:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.098 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.098 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1892530 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1892530 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1892530 ']' 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
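For reference, the test networking that nvmf_tcp_init assembled just above reduces to roughly the sequence below. This is a condensed restatement of the commands already visible in the trace, not additional captured output; cvl_0_0/cvl_0_1 are the two e810 ports enumerated earlier, with the target side isolated in its own network namespace:

  # move one port into a private namespace for the target, keep the other as initiator
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side keeps 10.0.0.1, target side gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the default NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside cvl_0_0_ns_spdk (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process the script is waiting on here.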
00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.099 14:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.099 [2024-07-14 14:52:15.321470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:36.099 [2024-07-14 14:52:15.321605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.099 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.357 [2024-07-14 14:52:15.463539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.615 [2024-07-14 14:52:15.728410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.615 [2024-07-14 14:52:15.728483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.615 [2024-07-14 14:52:15.728512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.615 [2024-07-14 14:52:15.728534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.615 [2024-07-14 14:52:15.728554] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.615 [2024-07-14 14:52:15.728672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.615 [2024-07-14 14:52:15.728731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.615 [2024-07-14 14:52:15.728955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.615 [2024-07-14 14:52:15.728962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.179 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.437 [2024-07-14 14:52:16.552031] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.437 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.695 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:37.695 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.953 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:37.953 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.210 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
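The target provisioning that fio.sh performs in the surrounding steps boils down to the JSON-RPC sequence sketched below. It is a condensed sketch assembled from the rpc.py calls visible in the trace, not part of the captured output itself; $rpc_py is the scripts/rpc.py path assigned at fio.sh@14 above, and the bdev sizes follow MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512:

  # TCP transport inside the already-running nvmf_tgt
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdevs with 512-byte blocks: Malloc0/Malloc1 used directly,
  # Malloc2/Malloc3 feeding a RAID-0, Malloc4..Malloc6 feeding a concat
  $rpc_py bdev_malloc_create 64 512        # repeated once per Malloc bdev
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem exposing four namespaces over TCP on 10.0.0.2:4420
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

The host then attaches with nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 (plus the hostnqn/hostid shown later in the trace), after which the four namespaces appear as /dev/nvme0n1 through /dev/nvme0n4 and back the four fio jobs below.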
00:19:38.468 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.725 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:38.725 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:38.982 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.240 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:39.240 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.497 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:39.497 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.755 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:39.755 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:40.013 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:40.271 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:40.271 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.530 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:40.530 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:40.788 14:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.045 [2024-07-14 14:52:20.253697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.046 14:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:41.303 14:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:41.562 14:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:42.145 14:52:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:44.046 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:44.046 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:44.046 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:44.304 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:44.304 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.304 14:52:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:44.304 14:52:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:44.304 [global] 00:19:44.304 thread=1 00:19:44.304 invalidate=1 00:19:44.304 rw=write 00:19:44.304 time_based=1 00:19:44.304 runtime=1 00:19:44.304 ioengine=libaio 00:19:44.304 direct=1 00:19:44.304 bs=4096 00:19:44.304 iodepth=1 00:19:44.304 norandommap=0 00:19:44.304 numjobs=1 00:19:44.304 00:19:44.304 verify_dump=1 00:19:44.304 verify_backlog=512 00:19:44.304 verify_state_save=0 00:19:44.304 do_verify=1 00:19:44.304 verify=crc32c-intel 00:19:44.304 [job0] 00:19:44.304 filename=/dev/nvme0n1 00:19:44.304 [job1] 00:19:44.304 filename=/dev/nvme0n2 00:19:44.304 [job2] 00:19:44.304 filename=/dev/nvme0n3 00:19:44.304 [job3] 00:19:44.304 filename=/dev/nvme0n4 00:19:44.304 Could not set queue depth (nvme0n1) 00:19:44.304 Could not set queue depth (nvme0n2) 00:19:44.304 Could not set queue depth (nvme0n3) 00:19:44.304 Could not set queue depth (nvme0n4) 00:19:44.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.304 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.304 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.304 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.304 fio-3.35 00:19:44.304 Starting 4 threads 00:19:45.683 00:19:45.683 job0: (groupid=0, jobs=1): err= 0: pid=1893677: Sun Jul 14 14:52:24 2024 00:19:45.683 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:45.683 slat (nsec): min=5934, max=47283, avg=13510.63, stdev=5563.76 00:19:45.683 clat (usec): min=237, max=1858, avg=295.89, stdev=52.81 00:19:45.683 lat (usec): min=244, max=1867, avg=309.40, stdev=53.75 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:19:45.683 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:19:45.683 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:19:45.683 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 889], 99.95th=[ 1860], 00:19:45.683 | 99.99th=[ 1860] 00:19:45.683 write: IOPS=1934, BW=7736KiB/s (7922kB/s)(7744KiB/1001msec); 0 zone resets 00:19:45.683 slat (usec): min=6, max=753, avg=18.74, stdev=17.87 00:19:45.683 clat (usec): min=183, 
max=1185, avg=244.27, stdev=46.17 00:19:45.683 lat (usec): min=192, max=1197, avg=263.01, stdev=50.68 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 225], 00:19:45.683 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:19:45.683 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 306], 00:19:45.683 | 99.00th=[ 420], 99.50th=[ 461], 99.90th=[ 816], 99.95th=[ 1188], 00:19:45.683 | 99.99th=[ 1188] 00:19:45.683 bw ( KiB/s): min= 8192, max= 8192, per=31.95%, avg=8192.00, stdev= 0.00, samples=1 00:19:45.683 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:45.683 lat (usec) : 250=46.03%, 500=53.66%, 750=0.14%, 1000=0.12% 00:19:45.683 lat (msec) : 2=0.06% 00:19:45.683 cpu : usr=4.60%, sys=7.10%, ctx=3475, majf=0, minf=1 00:19:45.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 issued rwts: total=1536,1936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.683 job1: (groupid=0, jobs=1): err= 0: pid=1893678: Sun Jul 14 14:52:24 2024 00:19:45.683 read: IOPS=1277, BW=5109KiB/s (5231kB/s)(5216KiB/1021msec) 00:19:45.683 slat (nsec): min=6348, max=53311, avg=14491.05, stdev=6045.34 00:19:45.683 clat (usec): min=272, max=41000, avg=431.66, stdev=1590.52 00:19:45.683 lat (usec): min=281, max=41016, avg=446.15, stdev=1590.51 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:19:45.683 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:19:45.683 | 70.00th=[ 379], 80.00th=[ 429], 90.00th=[ 490], 95.00th=[ 498], 00:19:45.683 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:19:45.683 | 99.99th=[41157] 00:19:45.683 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:19:45.683 slat (usec): min=7, max=850, avg=19.48, stdev=22.16 00:19:45.683 clat (usec): min=202, max=623, avg=254.68, stdev=38.44 00:19:45.683 lat (usec): min=211, max=1354, avg=274.15, stdev=47.25 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:19:45.683 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:19:45.683 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 318], 00:19:45.683 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 586], 99.95th=[ 627], 00:19:45.683 | 99.99th=[ 627] 00:19:45.683 bw ( KiB/s): min= 4440, max= 7848, per=23.96%, avg=6144.00, stdev=2409.82, samples=2 00:19:45.683 iops : min= 1110, max= 1962, avg=1536.00, stdev=602.45, samples=2 00:19:45.683 lat (usec) : 250=31.90%, 500=66.09%, 750=1.94% 00:19:45.683 lat (msec) : 50=0.07% 00:19:45.683 cpu : usr=2.35%, sys=7.35%, ctx=2843, majf=0, minf=2 00:19:45.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 issued rwts: total=1304,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.683 job2: (groupid=0, jobs=1): err= 0: pid=1893681: Sun Jul 14 14:52:24 2024 00:19:45.683 read: IOPS=1454, BW=5818KiB/s 
(5957kB/s)(5940KiB/1021msec) 00:19:45.683 slat (nsec): min=5981, max=56650, avg=13355.88, stdev=5314.46 00:19:45.683 clat (usec): min=271, max=41249, avg=386.81, stdev=1487.74 00:19:45.683 lat (usec): min=280, max=41264, avg=400.17, stdev=1487.76 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:19:45.683 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:19:45.683 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 367], 95.00th=[ 375], 00:19:45.683 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[40633], 99.95th=[41157], 00:19:45.683 | 99.99th=[41157] 00:19:45.683 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:19:45.683 slat (nsec): min=7743, max=60845, avg=18798.69, stdev=6513.48 00:19:45.683 clat (usec): min=201, max=492, avg=249.36, stdev=18.14 00:19:45.683 lat (usec): min=209, max=513, avg=268.16, stdev=21.15 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 239], 00:19:45.683 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:19:45.683 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:19:45.683 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 420], 99.95th=[ 494], 00:19:45.683 | 99.99th=[ 494] 00:19:45.683 bw ( KiB/s): min= 4096, max= 8192, per=23.96%, avg=6144.00, stdev=2896.31, samples=2 00:19:45.683 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:45.683 lat (usec) : 250=26.88%, 500=72.99%, 1000=0.03% 00:19:45.683 lat (msec) : 2=0.03%, 50=0.07% 00:19:45.683 cpu : usr=3.33%, sys=6.76%, ctx=3021, majf=0, minf=1 00:19:45.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 issued rwts: total=1485,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.683 job3: (groupid=0, jobs=1): err= 0: pid=1893682: Sun Jul 14 14:52:24 2024 00:19:45.683 read: IOPS=1432, BW=5730KiB/s (5868kB/s)(5736KiB/1001msec) 00:19:45.683 slat (nsec): min=6006, max=45447, avg=12885.21, stdev=5302.49 00:19:45.683 clat (usec): min=277, max=946, avg=377.06, stdev=63.47 00:19:45.683 lat (usec): min=285, max=955, avg=389.95, stdev=64.72 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 326], 00:19:45.683 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:19:45.683 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 490], 95.00th=[ 502], 00:19:45.683 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 947], 00:19:45.683 | 99.99th=[ 947] 00:19:45.683 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:45.683 slat (nsec): min=7622, max=55499, avg=18330.64, stdev=6692.04 00:19:45.683 clat (usec): min=206, max=1136, avg=260.19, stdev=41.53 00:19:45.683 lat (usec): min=217, max=1144, avg=278.52, stdev=42.28 00:19:45.683 clat percentiles (usec): 00:19:45.683 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 245], 00:19:45.683 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:19:45.683 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:19:45.683 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 889], 99.95th=[ 1139], 00:19:45.683 | 99.99th=[ 1139] 00:19:45.683 bw ( KiB/s): min= 7624, max= 7624, 
per=29.74%, avg=7624.00, stdev= 0.00, samples=1 00:19:45.683 iops : min= 1906, max= 1906, avg=1906.00, stdev= 0.00, samples=1 00:19:45.683 lat (usec) : 250=17.21%, 500=79.93%, 750=2.73%, 1000=0.10% 00:19:45.683 lat (msec) : 2=0.03% 00:19:45.683 cpu : usr=4.00%, sys=5.90%, ctx=2971, majf=0, minf=1 00:19:45.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.683 issued rwts: total=1434,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.683 00:19:45.683 Run status group 0 (all jobs): 00:19:45.683 READ: bw=22.0MiB/s (23.1MB/s), 5109KiB/s-6138KiB/s (5231kB/s-6285kB/s), io=22.5MiB (23.6MB), run=1001-1021msec 00:19:45.683 WRITE: bw=25.0MiB/s (26.3MB/s), 6018KiB/s-7736KiB/s (6162kB/s-7922kB/s), io=25.6MiB (26.8MB), run=1001-1021msec 00:19:45.683 00:19:45.683 Disk stats (read/write): 00:19:45.683 nvme0n1: ios=1431/1536, merge=0/0, ticks=593/379, in_queue=972, util=97.29% 00:19:45.683 nvme0n2: ios=1127/1536, merge=0/0, ticks=606/355, in_queue=961, util=97.25% 00:19:45.683 nvme0n3: ios=1218/1536, merge=0/0, ticks=394/355, in_queue=749, util=88.98% 00:19:45.683 nvme0n4: ios=1024/1501, merge=0/0, ticks=396/363, in_queue=759, util=89.63% 00:19:45.684 14:52:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:45.684 [global] 00:19:45.684 thread=1 00:19:45.684 invalidate=1 00:19:45.684 rw=randwrite 00:19:45.684 time_based=1 00:19:45.684 runtime=1 00:19:45.684 ioengine=libaio 00:19:45.684 direct=1 00:19:45.684 bs=4096 00:19:45.684 iodepth=1 00:19:45.684 norandommap=0 00:19:45.684 numjobs=1 00:19:45.684 00:19:45.684 verify_dump=1 00:19:45.684 verify_backlog=512 00:19:45.684 verify_state_save=0 00:19:45.684 do_verify=1 00:19:45.684 verify=crc32c-intel 00:19:45.684 [job0] 00:19:45.684 filename=/dev/nvme0n1 00:19:45.684 [job1] 00:19:45.684 filename=/dev/nvme0n2 00:19:45.684 [job2] 00:19:45.684 filename=/dev/nvme0n3 00:19:45.684 [job3] 00:19:45.684 filename=/dev/nvme0n4 00:19:45.684 Could not set queue depth (nvme0n1) 00:19:45.684 Could not set queue depth (nvme0n2) 00:19:45.684 Could not set queue depth (nvme0n3) 00:19:45.684 Could not set queue depth (nvme0n4) 00:19:45.940 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.940 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.940 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.940 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.940 fio-3.35 00:19:45.940 Starting 4 threads 00:19:47.315 00:19:47.315 job0: (groupid=0, jobs=1): err= 0: pid=1893917: Sun Jul 14 14:52:26 2024 00:19:47.315 read: IOPS=351, BW=1405KiB/s (1439kB/s)(1412KiB/1005msec) 00:19:47.315 slat (nsec): min=5348, max=45024, avg=10497.33, stdev=5930.06 00:19:47.315 clat (usec): min=247, max=41178, avg=2403.32, stdev=8984.87 00:19:47.315 lat (usec): min=255, max=41193, avg=2413.81, stdev=8988.70 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:19:47.315 | 30.00th=[ 273], 40.00th=[ 
277], 50.00th=[ 281], 60.00th=[ 285], 00:19:47.315 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[40633], 00:19:47.315 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:47.315 | 99.99th=[41157] 00:19:47.315 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:19:47.315 slat (nsec): min=6518, max=37692, avg=11120.17, stdev=4190.58 00:19:47.315 clat (usec): min=173, max=451, avg=281.65, stdev=77.06 00:19:47.315 lat (usec): min=180, max=459, avg=292.77, stdev=76.77 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:19:47.315 | 30.00th=[ 219], 40.00th=[ 237], 50.00th=[ 255], 60.00th=[ 289], 00:19:47.315 | 70.00th=[ 343], 80.00th=[ 388], 90.00th=[ 388], 95.00th=[ 396], 00:19:47.315 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 453], 99.95th=[ 453], 00:19:47.315 | 99.99th=[ 453] 00:19:47.315 bw ( KiB/s): min= 4096, max= 4096, per=30.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.315 lat (usec) : 250=28.55%, 500=69.13%, 750=0.12% 00:19:47.315 lat (msec) : 20=0.12%, 50=2.08% 00:19:47.315 cpu : usr=0.40%, sys=1.10%, ctx=866, majf=0, minf=1 00:19:47.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 issued rwts: total=353,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.315 job1: (groupid=0, jobs=1): err= 0: pid=1893918: Sun Jul 14 14:52:26 2024 00:19:47.315 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:47.315 slat (nsec): min=6649, max=68675, avg=13082.20, stdev=6070.45 00:19:47.315 clat (usec): min=226, max=617, avg=295.38, stdev=70.92 00:19:47.315 lat (usec): min=233, max=639, avg=308.47, stdev=73.76 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 255], 00:19:47.315 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:19:47.315 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 490], 00:19:47.315 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 619], 00:19:47.315 | 99.99th=[ 619] 00:19:47.315 write: IOPS=1986, BW=7944KiB/s (8135kB/s)(7952KiB/1001msec); 0 zone resets 00:19:47.315 slat (nsec): min=7024, max=49681, avg=17632.76, stdev=6001.53 00:19:47.315 clat (usec): min=172, max=488, avg=238.95, stdev=52.64 00:19:47.315 lat (usec): min=181, max=503, avg=256.58, stdev=51.37 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:19:47.315 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 227], 00:19:47.315 | 70.00th=[ 237], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 379], 00:19:47.315 | 99.00th=[ 416], 99.50th=[ 424], 99.90th=[ 437], 99.95th=[ 490], 00:19:47.315 | 99.99th=[ 490] 00:19:47.315 bw ( KiB/s): min= 8192, max= 8192, per=60.50%, avg=8192.00, stdev= 0.00, samples=1 00:19:47.315 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:47.315 lat (usec) : 250=49.46%, 500=48.81%, 750=1.73% 00:19:47.315 cpu : usr=4.30%, sys=7.50%, ctx=3525, majf=0, minf=1 00:19:47.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:47.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 issued rwts: total=1536,1988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.315 job2: (groupid=0, jobs=1): err= 0: pid=1893919: Sun Jul 14 14:52:26 2024 00:19:47.315 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:19:47.315 slat (nsec): min=7440, max=34828, avg=24691.95, stdev=9760.06 00:19:47.315 clat (usec): min=40843, max=41954, avg=41033.69, stdev=233.37 00:19:47.315 lat (usec): min=40861, max=41969, avg=41058.38, stdev=229.68 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:47.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:47.315 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:47.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:47.315 | 99.99th=[42206] 00:19:47.315 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:19:47.315 slat (nsec): min=7330, max=38385, avg=11181.55, stdev=4805.61 00:19:47.315 clat (usec): min=200, max=505, avg=282.91, stdev=61.70 00:19:47.315 lat (usec): min=208, max=514, avg=294.09, stdev=61.43 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:19:47.315 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 277], 00:19:47.315 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 392], 95.00th=[ 400], 00:19:47.315 | 99.00th=[ 420], 99.50th=[ 424], 99.90th=[ 506], 99.95th=[ 506], 00:19:47.315 | 99.99th=[ 506] 00:19:47.315 bw ( KiB/s): min= 4096, max= 4096, per=30.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.315 lat (usec) : 250=39.40%, 500=56.47%, 750=0.19% 00:19:47.315 lat (msec) : 50=3.94% 00:19:47.315 cpu : usr=0.59%, sys=0.59%, ctx=533, majf=0, minf=2 00:19:47.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.315 job3: (groupid=0, jobs=1): err= 0: pid=1893920: Sun Jul 14 14:52:26 2024 00:19:47.315 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:19:47.315 slat (nsec): min=7502, max=34885, avg=25859.25, stdev=9405.93 00:19:47.315 clat (usec): min=357, max=41331, avg=37581.89, stdev=11462.11 00:19:47.315 lat (usec): min=375, max=41345, avg=37607.75, stdev=11466.12 00:19:47.315 clat percentiles (usec): 00:19:47.315 | 1.00th=[ 359], 5.00th=[ 379], 10.00th=[40633], 20.00th=[41157], 00:19:47.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:47.315 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:47.315 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:47.315 | 99.99th=[41157] 00:19:47.315 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:19:47.315 slat (nsec): min=6314, max=38736, avg=9559.35, stdev=4424.51 00:19:47.315 clat (usec): min=185, max=461, avg=252.89, stdev=64.15 00:19:47.315 lat (usec): min=192, max=469, avg=262.45, stdev=65.10 00:19:47.315 clat percentiles (usec): 
00:19:47.315 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:19:47.315 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:19:47.315 | 70.00th=[ 245], 80.00th=[ 306], 90.00th=[ 375], 95.00th=[ 392], 00:19:47.315 | 99.00th=[ 441], 99.50th=[ 445], 99.90th=[ 461], 99.95th=[ 461], 00:19:47.315 | 99.99th=[ 461] 00:19:47.315 bw ( KiB/s): min= 4096, max= 4096, per=30.25%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.315 lat (usec) : 250=70.15%, 500=25.75% 00:19:47.315 lat (msec) : 50=4.10% 00:19:47.315 cpu : usr=0.19%, sys=0.48%, ctx=541, majf=0, minf=1 00:19:47.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.315 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.315 00:19:47.315 Run status group 0 (all jobs): 00:19:47.315 READ: bw=7431KiB/s (7610kB/s), 82.8KiB/s-6138KiB/s (84.7kB/s-6285kB/s), io=7736KiB (7922kB), run=1001-1041msec 00:19:47.315 WRITE: bw=13.2MiB/s (13.9MB/s), 1967KiB/s-7944KiB/s (2015kB/s-8135kB/s), io=13.8MiB (14.4MB), run=1001-1041msec 00:19:47.315 00:19:47.315 Disk stats (read/write): 00:19:47.315 nvme0n1: ios=398/512, merge=0/0, ticks=695/142, in_queue=837, util=86.67% 00:19:47.316 nvme0n2: ios=1442/1536, merge=0/0, ticks=419/355, in_queue=774, util=86.79% 00:19:47.316 nvme0n3: ios=17/512, merge=0/0, ticks=698/138, in_queue=836, util=88.91% 00:19:47.316 nvme0n4: ios=42/512, merge=0/0, ticks=1641/124, in_queue=1765, util=97.68% 00:19:47.316 14:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:47.316 [global] 00:19:47.316 thread=1 00:19:47.316 invalidate=1 00:19:47.316 rw=write 00:19:47.316 time_based=1 00:19:47.316 runtime=1 00:19:47.316 ioengine=libaio 00:19:47.316 direct=1 00:19:47.316 bs=4096 00:19:47.316 iodepth=128 00:19:47.316 norandommap=0 00:19:47.316 numjobs=1 00:19:47.316 00:19:47.316 verify_dump=1 00:19:47.316 verify_backlog=512 00:19:47.316 verify_state_save=0 00:19:47.316 do_verify=1 00:19:47.316 verify=crc32c-intel 00:19:47.316 [job0] 00:19:47.316 filename=/dev/nvme0n1 00:19:47.316 [job1] 00:19:47.316 filename=/dev/nvme0n2 00:19:47.316 [job2] 00:19:47.316 filename=/dev/nvme0n3 00:19:47.316 [job3] 00:19:47.316 filename=/dev/nvme0n4 00:19:47.316 Could not set queue depth (nvme0n1) 00:19:47.316 Could not set queue depth (nvme0n2) 00:19:47.316 Could not set queue depth (nvme0n3) 00:19:47.316 Could not set queue depth (nvme0n4) 00:19:47.316 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.316 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.316 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.316 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.316 fio-3.35 00:19:47.316 Starting 4 threads 00:19:48.697 00:19:48.697 job0: (groupid=0, jobs=1): err= 0: pid=1894148: Sun Jul 14 14:52:27 2024 00:19:48.697 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(14.3MiB/1045msec) 00:19:48.697 slat (usec): 
min=3, max=15703, avg=118.11, stdev=789.64 00:19:48.697 clat (usec): min=8760, max=55771, avg=16338.87, stdev=6374.99 00:19:48.697 lat (usec): min=8770, max=55778, avg=16456.98, stdev=6415.92 00:19:48.697 clat percentiles (usec): 00:19:48.697 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[12387], 20.00th=[13173], 00:19:48.697 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14484], 60.00th=[15139], 00:19:48.697 | 70.00th=[16450], 80.00th=[17957], 90.00th=[22676], 95.00th=[26346], 00:19:48.697 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:19:48.697 | 99.99th=[55837] 00:19:48.697 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:19:48.697 slat (usec): min=4, max=13297, avg=128.77, stdev=711.33 00:19:48.697 clat (usec): min=4799, max=60410, avg=17710.46, stdev=7753.39 00:19:48.697 lat (usec): min=4812, max=60417, avg=17839.23, stdev=7800.60 00:19:48.697 clat percentiles (usec): 00:19:48.697 | 1.00th=[ 8160], 5.00th=[10028], 10.00th=[11469], 20.00th=[13304], 00:19:48.697 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14877], 60.00th=[15664], 00:19:48.697 | 70.00th=[17433], 80.00th=[23462], 90.00th=[27657], 95.00th=[33424], 00:19:48.697 | 99.00th=[55837], 99.50th=[57410], 99.90th=[59507], 99.95th=[60556], 00:19:48.697 | 99.99th=[60556] 00:19:48.697 bw ( KiB/s): min=15704, max=16712, per=26.80%, avg=16208.00, stdev=712.76, samples=2 00:19:48.697 iops : min= 3926, max= 4178, avg=4052.00, stdev=178.19, samples=2 00:19:48.697 lat (msec) : 10=3.79%, 20=77.38%, 50=17.75%, 100=1.08% 00:19:48.697 cpu : usr=4.21%, sys=8.33%, ctx=367, majf=0, minf=1 00:19:48.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:48.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.697 issued rwts: total=3668,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.697 job1: (groupid=0, jobs=1): err= 0: pid=1894158: Sun Jul 14 14:52:27 2024 00:19:48.697 read: IOPS=3390, BW=13.2MiB/s (13.9MB/s)(13.4MiB/1010msec) 00:19:48.697 slat (usec): min=2, max=29088, avg=156.09, stdev=1000.14 00:19:48.697 clat (usec): min=6323, max=74408, avg=19475.45, stdev=11269.79 00:19:48.697 lat (usec): min=6332, max=74417, avg=19631.54, stdev=11317.04 00:19:48.697 clat percentiles (usec): 00:19:48.697 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[12125], 20.00th=[13042], 00:19:48.697 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14746], 60.00th=[15926], 00:19:48.697 | 70.00th=[19006], 80.00th=[26346], 90.00th=[30540], 95.00th=[39584], 00:19:48.697 | 99.00th=[67634], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:19:48.697 | 99.99th=[73925] 00:19:48.697 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:19:48.697 slat (usec): min=3, max=17172, avg=121.85, stdev=725.00 00:19:48.697 clat (usec): min=6188, max=71540, avg=16791.10, stdev=6344.85 00:19:48.697 lat (usec): min=6192, max=71545, avg=16912.95, stdev=6359.12 00:19:48.697 clat percentiles (usec): 00:19:48.697 | 1.00th=[ 8291], 5.00th=[11207], 10.00th=[12518], 20.00th=[13304], 00:19:48.697 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[15139], 00:19:48.697 | 70.00th=[17695], 80.00th=[20317], 90.00th=[24511], 95.00th=[26084], 00:19:48.697 | 99.00th=[44303], 99.50th=[54264], 99.90th=[54264], 99.95th=[71828], 00:19:48.697 | 99.99th=[71828] 00:19:48.697 bw ( KiB/s): min=12288, max=16384, 
per=23.71%, avg=14336.00, stdev=2896.31, samples=2 00:19:48.697 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:48.697 lat (msec) : 10=1.94%, 20=72.87%, 50=22.99%, 100=2.20% 00:19:48.697 cpu : usr=3.67%, sys=6.34%, ctx=394, majf=0, minf=1 00:19:48.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:48.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.698 issued rwts: total=3424,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.698 job2: (groupid=0, jobs=1): err= 0: pid=1894187: Sun Jul 14 14:52:27 2024 00:19:48.698 read: IOPS=3630, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1013msec) 00:19:48.698 slat (usec): min=3, max=16806, avg=138.33, stdev=952.73 00:19:48.698 clat (usec): min=6342, max=35006, avg=17838.11, stdev=4635.98 00:19:48.698 lat (usec): min=6360, max=35029, avg=17976.45, stdev=4691.22 00:19:48.698 clat percentiles (usec): 00:19:48.698 | 1.00th=[ 7898], 5.00th=[12256], 10.00th=[13042], 20.00th=[14746], 00:19:48.698 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16712], 60.00th=[17433], 00:19:48.698 | 70.00th=[18220], 80.00th=[20841], 90.00th=[24511], 95.00th=[27132], 00:19:48.698 | 99.00th=[33162], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:19:48.698 | 99.99th=[34866] 00:19:48.698 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:19:48.698 slat (usec): min=4, max=12761, avg=107.29, stdev=558.31 00:19:48.698 clat (usec): min=1412, max=35010, avg=15370.36, stdev=3686.86 00:19:48.698 lat (usec): min=1497, max=35031, avg=15477.65, stdev=3736.14 00:19:48.698 clat percentiles (usec): 00:19:48.698 | 1.00th=[ 6325], 5.00th=[ 8094], 10.00th=[10290], 20.00th=[13173], 00:19:48.698 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:19:48.698 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19006], 95.00th=[20055], 00:19:48.698 | 99.00th=[26870], 99.50th=[28181], 99.90th=[33424], 99.95th=[34866], 00:19:48.698 | 99.99th=[34866] 00:19:48.698 bw ( KiB/s): min=16120, max=16384, per=26.87%, avg=16252.00, stdev=186.68, samples=2 00:19:48.698 iops : min= 4030, max= 4096, avg=4063.00, stdev=46.67, samples=2 00:19:48.698 lat (msec) : 2=0.01%, 10=5.71%, 20=81.18%, 50=13.09% 00:19:48.698 cpu : usr=5.82%, sys=11.35%, ctx=468, majf=0, minf=1 00:19:48.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:48.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.698 issued rwts: total=3678,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.698 job3: (groupid=0, jobs=1): err= 0: pid=1894199: Sun Jul 14 14:52:27 2024 00:19:48.698 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:19:48.698 slat (usec): min=4, max=7232, avg=124.27, stdev=667.74 00:19:48.698 clat (usec): min=7692, max=29336, avg=16218.68, stdev=2734.39 00:19:48.698 lat (usec): min=7701, max=30678, avg=16342.94, stdev=2785.80 00:19:48.698 clat percentiles (usec): 00:19:48.698 | 1.00th=[10552], 5.00th=[12649], 10.00th=[13566], 20.00th=[14746], 00:19:48.698 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:19:48.698 | 70.00th=[16712], 80.00th=[17695], 90.00th=[19006], 95.00th=[20841], 00:19:48.698 | 
99.00th=[27132], 99.50th=[27919], 99.90th=[29230], 99.95th=[29230], 00:19:48.698 | 99.99th=[29230] 00:19:48.698 write: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec); 0 zone resets 00:19:48.698 slat (usec): min=3, max=28554, avg=124.39, stdev=718.11 00:19:48.698 clat (usec): min=6699, max=42653, avg=16320.84, stdev=2281.27 00:19:48.698 lat (usec): min=7978, max=42692, avg=16445.23, stdev=2360.19 00:19:48.698 clat percentiles (usec): 00:19:48.698 | 1.00th=[ 9765], 5.00th=[12780], 10.00th=[14222], 20.00th=[15008], 00:19:48.698 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:19:48.698 | 70.00th=[16712], 80.00th=[17957], 90.00th=[19006], 95.00th=[19792], 00:19:48.698 | 99.00th=[22938], 99.50th=[23462], 99.90th=[25822], 99.95th=[25822], 00:19:48.698 | 99.99th=[42730] 00:19:48.698 bw ( KiB/s): min=14784, max=16384, per=25.77%, avg=15584.00, stdev=1131.37, samples=2 00:19:48.698 iops : min= 3696, max= 4096, avg=3896.00, stdev=282.84, samples=2 00:19:48.698 lat (msec) : 10=0.97%, 20=93.19%, 50=5.84% 00:19:48.698 cpu : usr=9.05%, sys=9.15%, ctx=441, majf=0, minf=1 00:19:48.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:48.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.698 issued rwts: total=3584,4023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.698 00:19:48.698 Run status group 0 (all jobs): 00:19:48.698 READ: bw=53.7MiB/s (56.3MB/s), 13.2MiB/s-14.2MiB/s (13.9MB/s-14.9MB/s), io=56.1MiB (58.8MB), run=1007-1045msec 00:19:48.698 WRITE: bw=59.1MiB/s (61.9MB/s), 13.9MiB/s-15.8MiB/s (14.5MB/s-16.6MB/s), io=61.7MiB (64.7MB), run=1007-1045msec 00:19:48.698 00:19:48.698 Disk stats (read/write): 00:19:48.698 nvme0n1: ios=3112/3463, merge=0/0, ticks=29445/38647, in_queue=68092, util=99.50% 00:19:48.698 nvme0n2: ios=2711/3072, merge=0/0, ticks=17347/16489, in_queue=33836, util=97.36% 00:19:48.698 nvme0n3: ios=3072/3431, merge=0/0, ticks=52248/50194, in_queue=102442, util=88.80% 00:19:48.698 nvme0n4: ios=3130/3447, merge=0/0, ticks=24223/25588, in_queue=49811, util=97.46% 00:19:48.698 14:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:48.698 [global] 00:19:48.698 thread=1 00:19:48.698 invalidate=1 00:19:48.698 rw=randwrite 00:19:48.698 time_based=1 00:19:48.698 runtime=1 00:19:48.698 ioengine=libaio 00:19:48.698 direct=1 00:19:48.698 bs=4096 00:19:48.698 iodepth=128 00:19:48.698 norandommap=0 00:19:48.698 numjobs=1 00:19:48.698 00:19:48.698 verify_dump=1 00:19:48.698 verify_backlog=512 00:19:48.698 verify_state_save=0 00:19:48.698 do_verify=1 00:19:48.698 verify=crc32c-intel 00:19:48.698 [job0] 00:19:48.698 filename=/dev/nvme0n1 00:19:48.698 [job1] 00:19:48.698 filename=/dev/nvme0n2 00:19:48.698 [job2] 00:19:48.698 filename=/dev/nvme0n3 00:19:48.698 [job3] 00:19:48.698 filename=/dev/nvme0n4 00:19:48.698 Could not set queue depth (nvme0n1) 00:19:48.698 Could not set queue depth (nvme0n2) 00:19:48.698 Could not set queue depth (nvme0n3) 00:19:48.698 Could not set queue depth (nvme0n4) 00:19:48.698 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.698 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:19:48.698 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.698 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.698 fio-3.35 00:19:48.698 Starting 4 threads 00:19:50.072 00:19:50.072 job0: (groupid=0, jobs=1): err= 0: pid=1894487: Sun Jul 14 14:52:29 2024 00:19:50.072 read: IOPS=3236, BW=12.6MiB/s (13.3MB/s)(12.7MiB/1004msec) 00:19:50.072 slat (usec): min=2, max=13550, avg=162.84, stdev=1029.79 00:19:50.072 clat (usec): min=2972, max=46483, avg=20245.91, stdev=8396.11 00:19:50.072 lat (usec): min=4615, max=46487, avg=20408.75, stdev=8448.26 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[ 9634], 5.00th=[11469], 10.00th=[12911], 20.00th=[13435], 00:19:50.072 | 30.00th=[13829], 40.00th=[15926], 50.00th=[17695], 60.00th=[19530], 00:19:50.072 | 70.00th=[21365], 80.00th=[28705], 90.00th=[34341], 95.00th=[37487], 00:19:50.072 | 99.00th=[42206], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:19:50.072 | 99.99th=[46400] 00:19:50.072 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:19:50.072 slat (usec): min=3, max=8128, avg=125.59, stdev=799.96 00:19:50.072 clat (usec): min=6834, max=43247, avg=17019.66, stdev=6171.04 00:19:50.072 lat (usec): min=6901, max=43259, avg=17145.25, stdev=6187.93 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[10159], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:19:50.072 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14746], 60.00th=[15008], 00:19:50.072 | 70.00th=[17957], 80.00th=[20055], 90.00th=[26084], 95.00th=[32375], 00:19:50.072 | 99.00th=[38536], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:50.072 | 99.99th=[43254] 00:19:50.072 bw ( KiB/s): min=12288, max=16384, per=30.01%, avg=14336.00, stdev=2896.31, samples=2 00:19:50.072 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:50.072 lat (msec) : 4=0.01%, 10=1.46%, 20=70.58%, 50=27.94% 00:19:50.072 cpu : usr=2.09%, sys=4.39%, ctx=193, majf=0, minf=1 00:19:50.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:50.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.072 issued rwts: total=3249,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.072 job1: (groupid=0, jobs=1): err= 0: pid=1894488: Sun Jul 14 14:52:29 2024 00:19:50.072 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:19:50.072 slat (usec): min=2, max=10055, avg=137.42, stdev=777.96 00:19:50.072 clat (usec): min=8309, max=35579, avg=17221.42, stdev=4725.10 00:19:50.072 lat (usec): min=8319, max=35616, avg=17358.83, stdev=4789.99 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:19:50.072 | 30.00th=[13304], 40.00th=[15401], 50.00th=[15795], 60.00th=[17957], 00:19:50.072 | 70.00th=[19006], 80.00th=[22152], 90.00th=[24249], 95.00th=[25560], 00:19:50.072 | 99.00th=[29492], 99.50th=[30278], 99.90th=[30540], 99.95th=[35390], 00:19:50.072 | 99.99th=[35390] 00:19:50.072 write: IOPS=3714, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec); 0 zone resets 00:19:50.072 slat (usec): min=3, max=10632, avg=128.23, stdev=738.64 00:19:50.072 clat (usec): min=4671, max=35439, avg=17499.80, stdev=5294.00 00:19:50.072 lat (usec): min=4694, 
max=35456, avg=17628.03, stdev=5357.82 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[ 5604], 5.00th=[10159], 10.00th=[12649], 20.00th=[14091], 00:19:50.072 | 30.00th=[14353], 40.00th=[14877], 50.00th=[15139], 60.00th=[17957], 00:19:50.072 | 70.00th=[20055], 80.00th=[21627], 90.00th=[23725], 95.00th=[28181], 00:19:50.072 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:19:50.072 | 99.99th=[35390] 00:19:50.072 bw ( KiB/s): min=13184, max=15664, per=30.19%, avg=14424.00, stdev=1753.62, samples=2 00:19:50.072 iops : min= 3296, max= 3916, avg=3606.00, stdev=438.41, samples=2 00:19:50.072 lat (msec) : 10=3.59%, 20=68.81%, 50=27.59% 00:19:50.072 cpu : usr=3.59%, sys=5.58%, ctx=397, majf=0, minf=1 00:19:50.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:50.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.072 issued rwts: total=3584,3733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.072 job2: (groupid=0, jobs=1): err= 0: pid=1894494: Sun Jul 14 14:52:29 2024 00:19:50.072 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:19:50.072 slat (usec): min=2, max=16232, avg=285.88, stdev=1659.86 00:19:50.072 clat (usec): min=11606, max=61798, avg=35766.62, stdev=13183.68 00:19:50.072 lat (usec): min=11610, max=61807, avg=36052.50, stdev=13202.57 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[16319], 5.00th=[16581], 10.00th=[19530], 20.00th=[22152], 00:19:50.072 | 30.00th=[25297], 40.00th=[30802], 50.00th=[34866], 60.00th=[41681], 00:19:50.072 | 70.00th=[45876], 80.00th=[48497], 90.00th=[54789], 95.00th=[58459], 00:19:50.072 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:19:50.072 | 99.99th=[61604] 00:19:50.072 write: IOPS=2081, BW=8327KiB/s (8527kB/s)(8360KiB/1004msec); 0 zone resets 00:19:50.072 slat (usec): min=3, max=17465, avg=193.17, stdev=1150.28 00:19:50.072 clat (usec): min=1953, max=54762, avg=25196.10, stdev=11763.64 00:19:50.072 lat (usec): min=3536, max=54768, avg=25389.27, stdev=11773.31 00:19:50.072 clat percentiles (usec): 00:19:50.072 | 1.00th=[ 8455], 5.00th=[13042], 10.00th=[15664], 20.00th=[16188], 00:19:50.072 | 30.00th=[16450], 40.00th=[17433], 50.00th=[19792], 60.00th=[21890], 00:19:50.072 | 70.00th=[30278], 80.00th=[38536], 90.00th=[43254], 95.00th=[48497], 00:19:50.072 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:19:50.072 | 99.99th=[54789] 00:19:50.072 bw ( KiB/s): min= 7528, max= 8856, per=17.15%, avg=8192.00, stdev=939.04, samples=2 00:19:50.072 iops : min= 1882, max= 2214, avg=2048.00, stdev=234.76, samples=2 00:19:50.073 lat (msec) : 2=0.02%, 4=0.41%, 10=0.17%, 20=30.91%, 50=59.86% 00:19:50.073 lat (msec) : 100=8.63% 00:19:50.073 cpu : usr=1.20%, sys=2.19%, ctx=202, majf=0, minf=1 00:19:50.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:50.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.073 issued rwts: total=2048,2090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.073 job3: (groupid=0, jobs=1): err= 0: pid=1894496: Sun Jul 14 14:52:29 2024 00:19:50.073 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 
00:19:50.073 slat (usec): min=2, max=14421, avg=194.76, stdev=1189.98 00:19:50.073 clat (usec): min=5916, max=43288, avg=26003.52, stdev=7146.49 00:19:50.073 lat (usec): min=5922, max=43294, avg=26198.28, stdev=7150.93 00:19:50.073 clat percentiles (usec): 00:19:50.073 | 1.00th=[ 5997], 5.00th=[12911], 10.00th=[16188], 20.00th=[21627], 00:19:50.073 | 30.00th=[23987], 40.00th=[24249], 50.00th=[25560], 60.00th=[27657], 00:19:50.073 | 70.00th=[29230], 80.00th=[31851], 90.00th=[34866], 95.00th=[38011], 00:19:50.073 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:50.073 | 99.99th=[43254] 00:19:50.073 write: IOPS=2628, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1010msec); 0 zone resets 00:19:50.073 slat (usec): min=3, max=10340, avg=174.87, stdev=1028.67 00:19:50.073 clat (usec): min=1055, max=50673, avg=23173.36, stdev=6578.72 00:19:50.073 lat (usec): min=1065, max=50681, avg=23348.23, stdev=6574.31 00:19:50.073 clat percentiles (usec): 00:19:50.073 | 1.00th=[ 2966], 5.00th=[ 6718], 10.00th=[15795], 20.00th=[21103], 00:19:50.073 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23725], 00:19:50.073 | 70.00th=[25035], 80.00th=[26870], 90.00th=[28705], 95.00th=[30278], 00:19:50.073 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:19:50.073 | 99.99th=[50594] 00:19:50.073 bw ( KiB/s): min= 8192, max=12288, per=21.44%, avg=10240.00, stdev=2896.31, samples=2 00:19:50.073 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:50.073 lat (msec) : 2=0.21%, 4=0.65%, 10=2.63%, 20=13.88%, 50=82.51% 00:19:50.073 lat (msec) : 100=0.12% 00:19:50.073 cpu : usr=1.78%, sys=3.77%, ctx=189, majf=0, minf=1 00:19:50.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:50.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.073 issued rwts: total=2560,2655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.073 00:19:50.073 Run status group 0 (all jobs): 00:19:50.073 READ: bw=44.2MiB/s (46.4MB/s), 8159KiB/s-13.9MiB/s (8355kB/s-14.6MB/s), io=44.7MiB (46.9MB), run=1004-1010msec 00:19:50.073 WRITE: bw=46.7MiB/s (48.9MB/s), 8327KiB/s-14.5MiB/s (8527kB/s-15.2MB/s), io=47.1MiB (49.4MB), run=1004-1010msec 00:19:50.073 00:19:50.073 Disk stats (read/write): 00:19:50.073 nvme0n1: ios=2610/3040, merge=0/0, ticks=18782/16620, in_queue=35402, util=87.17% 00:19:50.073 nvme0n2: ios=3092/3364, merge=0/0, ticks=24469/27180, in_queue=51649, util=86.79% 00:19:50.073 nvme0n3: ios=1707/2048, merge=0/0, ticks=15426/12291, in_queue=27717, util=97.70% 00:19:50.073 nvme0n4: ios=2074/2336, merge=0/0, ticks=18095/21415, in_queue=39510, util=97.79% 00:19:50.073 14:52:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:50.073 14:52:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1894632 00:19:50.073 14:52:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:50.073 14:52:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:50.073 [global] 00:19:50.073 thread=1 00:19:50.073 invalidate=1 00:19:50.073 rw=read 00:19:50.073 time_based=1 00:19:50.073 runtime=10 00:19:50.073 ioengine=libaio 00:19:50.073 direct=1 00:19:50.073 bs=4096 00:19:50.073 iodepth=1 00:19:50.073 norandommap=1 00:19:50.073 numjobs=1 00:19:50.073 
00:19:50.073 [job0] 00:19:50.073 filename=/dev/nvme0n1 00:19:50.073 [job1] 00:19:50.073 filename=/dev/nvme0n2 00:19:50.073 [job2] 00:19:50.073 filename=/dev/nvme0n3 00:19:50.073 [job3] 00:19:50.073 filename=/dev/nvme0n4 00:19:50.073 Could not set queue depth (nvme0n1) 00:19:50.073 Could not set queue depth (nvme0n2) 00:19:50.073 Could not set queue depth (nvme0n3) 00:19:50.073 Could not set queue depth (nvme0n4) 00:19:50.331 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.331 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.331 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.331 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.331 fio-3.35 00:19:50.331 Starting 4 threads 00:19:53.615 14:52:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:53.615 14:52:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:53.615 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6598656, buflen=4096 00:19:53.615 fio: pid=1894728, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.615 14:52:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:53.615 14:52:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:53.615 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=37867520, buflen=4096 00:19:53.615 fio: pid=1894726, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.872 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2465792, buflen=4096 00:19:53.872 fio: pid=1894724, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.872 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:53.872 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:54.132 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=4407296, buflen=4096 00:19:54.132 fio: pid=1894725, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.132 00:19:54.132 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1894724: Sun Jul 14 14:52:33 2024 00:19:54.132 read: IOPS=173, BW=694KiB/s (711kB/s)(2408KiB/3470msec) 00:19:54.133 slat (usec): min=4, max=16898, avg=60.06, stdev=888.29 00:19:54.133 clat (usec): min=220, max=41297, avg=5661.99, stdev=13750.40 00:19:54.133 lat (usec): min=225, max=58082, avg=5722.09, stdev=13914.07 00:19:54.133 clat percentiles (usec): 00:19:54.133 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:19:54.133 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:19:54.133 | 70.00th=[ 334], 80.00th=[ 429], 90.00th=[41157], 95.00th=[41157], 00:19:54.133 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:54.133 | 99.99th=[41157] 00:19:54.133 bw ( KiB/s): min= 96, max= 2176, 
per=5.88%, avg=788.00, stdev=1038.96, samples=6 00:19:54.133 iops : min= 24, max= 544, avg=197.00, stdev=259.74, samples=6 00:19:54.133 lat (usec) : 250=32.67%, 500=50.91%, 750=2.32%, 1000=0.33% 00:19:54.133 lat (msec) : 2=0.17%, 10=0.17%, 20=0.17%, 50=13.10% 00:19:54.133 cpu : usr=0.12%, sys=0.14%, ctx=605, majf=0, minf=1 00:19:54.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 issued rwts: total=603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.133 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1894725: Sun Jul 14 14:52:33 2024 00:19:54.133 read: IOPS=287, BW=1150KiB/s (1177kB/s)(4304KiB/3743msec) 00:19:54.133 slat (usec): min=4, max=22901, avg=38.53, stdev=722.67 00:19:54.133 clat (usec): min=215, max=42901, avg=3417.37, stdev=10870.46 00:19:54.133 lat (usec): min=222, max=45995, avg=3455.93, stdev=10920.25 00:19:54.133 clat percentiles (usec): 00:19:54.133 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:19:54.133 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 293], 00:19:54.133 | 70.00th=[ 334], 80.00th=[ 383], 90.00th=[ 486], 95.00th=[41157], 00:19:54.133 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:54.133 | 99.99th=[42730] 00:19:54.133 bw ( KiB/s): min= 96, max= 2280, per=4.81%, avg=644.57, stdev=946.86, samples=7 00:19:54.133 iops : min= 24, max= 570, avg=161.14, stdev=236.72, samples=7 00:19:54.133 lat (usec) : 250=28.69%, 500=62.49%, 750=1.11% 00:19:54.133 lat (msec) : 50=7.61% 00:19:54.133 cpu : usr=0.08%, sys=0.29%, ctx=1083, majf=0, minf=1 00:19:54.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 issued rwts: total=1077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.133 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1894726: Sun Jul 14 14:52:33 2024 00:19:54.133 read: IOPS=2913, BW=11.4MiB/s (11.9MB/s)(36.1MiB/3174msec) 00:19:54.133 slat (nsec): min=4509, max=67585, avg=12378.76, stdev=9328.41 00:19:54.133 clat (usec): min=221, max=41175, avg=325.59, stdev=1038.80 00:19:54.133 lat (usec): min=227, max=41190, avg=337.96, stdev=1039.39 00:19:54.133 clat percentiles (usec): 00:19:54.133 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:19:54.133 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 285], 00:19:54.133 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 400], 95.00th=[ 457], 00:19:54.133 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 611], 99.95th=[41157], 00:19:54.133 | 99.99th=[41157] 00:19:54.133 bw ( KiB/s): min= 7448, max=15200, per=86.40%, avg=11573.33, stdev=2956.49, samples=6 00:19:54.133 iops : min= 1862, max= 3800, avg=2893.33, stdev=739.12, samples=6 00:19:54.133 lat (usec) : 250=29.59%, 500=67.38%, 750=2.94% 00:19:54.133 lat (msec) : 4=0.01%, 50=0.06% 00:19:54.133 cpu : usr=1.45%, sys=4.16%, ctx=9246, majf=0, minf=1 00:19:54.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.133 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 issued rwts: total=9246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.133 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1894728: Sun Jul 14 14:52:33 2024 00:19:54.133 read: IOPS=553, BW=2212KiB/s (2265kB/s)(6444KiB/2913msec) 00:19:54.133 slat (nsec): min=4741, max=64177, avg=18289.83, stdev=9753.13 00:19:54.133 clat (usec): min=231, max=41412, avg=1768.03, stdev=7512.24 00:19:54.133 lat (usec): min=239, max=41445, avg=1786.32, stdev=7512.42 00:19:54.133 clat percentiles (usec): 00:19:54.133 | 1.00th=[ 241], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:19:54.133 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 334], 00:19:54.133 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 506], 00:19:54.133 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:54.133 | 99.99th=[41157] 00:19:54.133 bw ( KiB/s): min= 104, max= 6608, per=19.12%, avg=2561.60, stdev=2905.72, samples=5 00:19:54.133 iops : min= 26, max= 1652, avg=640.40, stdev=726.43, samples=5 00:19:54.133 lat (usec) : 250=2.79%, 500=92.06%, 750=1.55% 00:19:54.133 lat (msec) : 50=3.54% 00:19:54.133 cpu : usr=0.45%, sys=1.17%, ctx=1612, majf=0, minf=1 00:19:54.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.133 issued rwts: total=1612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.133 00:19:54.133 Run status group 0 (all jobs): 00:19:54.133 READ: bw=13.1MiB/s (13.7MB/s), 694KiB/s-11.4MiB/s (711kB/s-11.9MB/s), io=49.0MiB (51.3MB), run=2913-3743msec 00:19:54.133 00:19:54.133 Disk stats (read/write): 00:19:54.133 nvme0n1: ios=600/0, merge=0/0, ticks=3322/0, in_queue=3322, util=95.42% 00:19:54.133 nvme0n2: ios=666/0, merge=0/0, ticks=4447/0, in_queue=4447, util=98.93% 00:19:54.133 nvme0n3: ios=9059/0, merge=0/0, ticks=2846/0, in_queue=2846, util=96.79% 00:19:54.133 nvme0n4: ios=1610/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.71% 00:19:54.133 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.133 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:54.700 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.700 14:52:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:54.965 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.965 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:55.229 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.229 14:52:34 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:55.487 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.487 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:55.745 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:55.745 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1894632 00:19:55.745 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:55.745 14:52:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:56.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:56.678 nvmf hotplug test: fio failed as expected 00:19:56.678 14:52:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.936 rmmod nvme_tcp 00:19:56.936 rmmod nvme_fabrics 00:19:56.936 rmmod nvme_keyring 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 
1892530 ']' 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1892530 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1892530 ']' 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1892530 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892530 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892530' 00:19:56.936 killing process with pid 1892530 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1892530 00:19:56.936 14:52:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1892530 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.309 14:52:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.213 14:52:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.213 00:20:00.213 real 0m26.271s 00:20:00.213 user 1m31.060s 00:20:00.213 sys 0m6.862s 00:20:00.213 14:52:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.213 14:52:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 ************************************ 00:20:00.213 END TEST nvmf_fio_target 00:20:00.213 ************************************ 00:20:00.213 14:52:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:00.213 14:52:39 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:00.213 14:52:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:00.213 14:52:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.213 14:52:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.213 ************************************ 00:20:00.213 START TEST nvmf_bdevio 00:20:00.213 ************************************ 00:20:00.213 14:52:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:00.472 * Looking for test storage... 
00:20:00.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.472 14:52:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.473 14:52:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:02.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:02.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.377 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:02.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:02.378 
Found net devices under 0000:0a:00.1: cvl_0_1 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:02.378 00:20:02.378 --- 10.0.0.2 ping statistics --- 00:20:02.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.378 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:20:02.378 00:20:02.378 --- 10.0.0.1 ping statistics --- 00:20:02.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.378 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1897597 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1897597 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1897597 ']' 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.378 14:52:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:02.638 [2024-07-14 14:52:41.728641] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:02.638 [2024-07-14 14:52:41.728780] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.638 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.638 [2024-07-14 14:52:41.865185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.897 [2024-07-14 14:52:42.128173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.897 [2024-07-14 14:52:42.128250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:02.897 [2024-07-14 14:52:42.128278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.897 [2024-07-14 14:52:42.128299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.897 [2024-07-14 14:52:42.128329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.897 [2024-07-14 14:52:42.128469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.897 [2024-07-14 14:52:42.128537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.897 [2024-07-14 14:52:42.128584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.897 [2024-07-14 14:52:42.128596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 [2024-07-14 14:52:42.679282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 Malloc0 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.465 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
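The target bring-up traced above reduces to five RPCs against the freshly started nvmf_tgt. A minimal sketch of the same sequence, assuming the SPDK checkout path used in this job and the default /var/tmp/spdk.sock RPC socket (nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace, but its Unix-domain RPC socket is reachable from the default namespace):

#!/usr/bin/env bash
# Sketch: target-side RPC sequence issued by the bdevio test above.
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, flags as used by the test
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # expose Malloc0 as namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420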
00:20:03.725 [2024-07-14 14:52:42.785241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.725 { 00:20:03.725 "params": { 00:20:03.725 "name": "Nvme$subsystem", 00:20:03.725 "trtype": "$TEST_TRANSPORT", 00:20:03.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.725 "adrfam": "ipv4", 00:20:03.725 "trsvcid": "$NVMF_PORT", 00:20:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.725 "hdgst": ${hdgst:-false}, 00:20:03.725 "ddgst": ${ddgst:-false} 00:20:03.725 }, 00:20:03.725 "method": "bdev_nvme_attach_controller" 00:20:03.725 } 00:20:03.725 EOF 00:20:03.725 )") 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:03.725 14:52:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.725 "params": { 00:20:03.725 "name": "Nvme1", 00:20:03.725 "trtype": "tcp", 00:20:03.725 "traddr": "10.0.0.2", 00:20:03.725 "adrfam": "ipv4", 00:20:03.725 "trsvcid": "4420", 00:20:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.725 "hdgst": false, 00:20:03.725 "ddgst": false 00:20:03.725 }, 00:20:03.725 "method": "bdev_nvme_attach_controller" 00:20:03.725 }' 00:20:03.725 [2024-07-14 14:52:42.867086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
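The gen_nvmf_target_json output above is the initiator-side half: a single bdev_nvme_attach_controller entry that bdevio reads through /dev/fd/62. As a sketch, the same run could be driven from a standalone config file; note the outer "subsystems"/"bdev" wrapper shown here is the standard SPDK JSON-config layout and is assumed, since only the inner fragment is printed verbatim in this trace:

# Sketch: write the rendered fragment into a regular SPDK JSON config and hand it to bdevio.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json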
00:20:03.725 [2024-07-14 14:52:42.867241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897755 ] 00:20:03.725 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.725 [2024-07-14 14:52:42.992090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.985 [2024-07-14 14:52:43.235534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.985 [2024-07-14 14:52:43.235586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.985 [2024-07-14 14:52:43.235577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.554 I/O targets: 00:20:04.554 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:04.554 00:20:04.554 00:20:04.554 CUnit - A unit testing framework for C - Version 2.1-3 00:20:04.554 http://cunit.sourceforge.net/ 00:20:04.554 00:20:04.554 00:20:04.554 Suite: bdevio tests on: Nvme1n1 00:20:04.554 Test: blockdev write read block ...passed 00:20:04.554 Test: blockdev write zeroes read block ...passed 00:20:04.554 Test: blockdev write zeroes read no split ...passed 00:20:04.554 Test: blockdev write zeroes read split ...passed 00:20:04.554 Test: blockdev write zeroes read split partial ...passed 00:20:04.554 Test: blockdev reset ...[2024-07-14 14:52:43.855639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.554 [2024-07-14 14:52:43.855827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:20:04.813 [2024-07-14 14:52:43.869108] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:04.813 passed 00:20:04.813 Test: blockdev write read 8 blocks ...passed 00:20:04.813 Test: blockdev write read size > 128k ...passed 00:20:04.813 Test: blockdev write read invalid size ...passed 00:20:04.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:04.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:04.813 Test: blockdev write read max offset ...passed 00:20:04.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:04.813 Test: blockdev writev readv 8 blocks ...passed 00:20:04.813 Test: blockdev writev readv 30 x 1block ...passed 00:20:04.813 Test: blockdev writev readv block ...passed 00:20:04.813 Test: blockdev writev readv size > 128k ...passed 00:20:04.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:04.813 Test: blockdev comparev and writev ...[2024-07-14 14:52:44.045952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.046022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.046062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.046089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.046560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.046594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.046629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.046655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.047115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.047149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.047187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.047214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.047649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.047682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:04.813 [2024-07-14 14:52:44.047721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.813 [2024-07-14 14:52:44.047748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:04.813 passed 00:20:05.071 Test: blockdev nvme passthru rw ...passed 00:20:05.071 Test: blockdev nvme passthru vendor specific ...[2024-07-14 14:52:44.131267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.071 [2024-07-14 14:52:44.131323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.071 [2024-07-14 14:52:44.131559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.071 [2024-07-14 14:52:44.131602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.071 [2024-07-14 14:52:44.131804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.071 [2024-07-14 14:52:44.131842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.071 [2024-07-14 14:52:44.132045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.071 [2024-07-14 14:52:44.132078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.071 passed 00:20:05.071 Test: blockdev nvme admin passthru ...passed 00:20:05.071 Test: blockdev copy ...passed 00:20:05.071 00:20:05.071 Run Summary: Type Total Ran Passed Failed Inactive 00:20:05.071 suites 1 1 n/a 0 0 00:20:05.071 tests 23 23 23 0 0 00:20:05.071 asserts 152 152 152 0 n/a 00:20:05.071 00:20:05.071 Elapsed time = 1.035 seconds 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.011 rmmod nvme_tcp 00:20:06.011 rmmod nvme_fabrics 00:20:06.011 rmmod nvme_keyring 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1897597 ']' 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1897597 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1897597 ']' 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1897597 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1897597 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1897597' 00:20:06.011 killing process with pid 1897597 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1897597 00:20:06.011 14:52:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1897597 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.388 14:52:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.925 14:52:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.926 00:20:09.926 real 0m9.224s 00:20:09.926 user 0m21.775s 00:20:09.926 sys 0m2.358s 00:20:09.926 14:52:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.926 14:52:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:09.926 ************************************ 00:20:09.926 END TEST nvmf_bdevio 00:20:09.926 ************************************ 00:20:09.926 14:52:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.926 14:52:48 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:09.926 14:52:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.926 14:52:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.926 14:52:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.926 ************************************ 00:20:09.926 START TEST nvmf_auth_target 00:20:09.926 ************************************ 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:09.926 * Looking for test storage... 
00:20:09.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.926 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.831 14:52:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:11.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:11.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:20:11.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:11.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.831 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:20:11.832 00:20:11.832 --- 10.0.0.2 ping statistics --- 00:20:11.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.832 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:11.832 00:20:11.832 --- 10.0.0.1 ping statistics --- 00:20:11.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.832 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1900087 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1900087 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1900087 ']' 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
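The nvmf_tcp_init block above is what gives every test in this job its two endpoints: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. Condensed into a standalone sketch (device names as discovered above; run as root):

# Sketch: the netns topology nvmf_tcp_init builds for the NVMe/TCP tests.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                         # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator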
00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.832 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1900238 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de8759625aff862185f446fb6540a14a0a7bda949fd9a4c9 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u13 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de8759625aff862185f446fb6540a14a0a7bda949fd9a4c9 0 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de8759625aff862185f446fb6540a14a0a7bda949fd9a4c9 0 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de8759625aff862185f446fb6540a14a0a7bda949fd9a4c9 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u13 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u13 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.u13 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=257f029f58bed42901ae719e0815b391d931a3aaf950d09749a87ecb59d5a4da 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4qX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 257f029f58bed42901ae719e0815b391d931a3aaf950d09749a87ecb59d5a4da 3 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 257f029f58bed42901ae719e0815b391d931a3aaf950d09749a87ecb59d5a4da 3 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=257f029f58bed42901ae719e0815b391d931a3aaf950d09749a87ecb59d5a4da 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4qX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4qX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.4qX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fb9a6263770dc0850f875be21c477a1e 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uxX 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fb9a6263770dc0850f875be21c477a1e 1 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fb9a6263770dc0850f875be21c477a1e 1 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=fb9a6263770dc0850f875be21c477a1e 00:20:12.769 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:12.770 14:52:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uxX 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uxX 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.uxX 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a95fa4dbd6e1743477701f1bfebd89df783f5a4cb6776627 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.G9Z 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a95fa4dbd6e1743477701f1bfebd89df783f5a4cb6776627 2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a95fa4dbd6e1743477701f1bfebd89df783f5a4cb6776627 2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a95fa4dbd6e1743477701f1bfebd89df783f5a4cb6776627 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.G9Z 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.G9Z 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.G9Z 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38e60d5aead5751345e9774e56c5aebb36b702194a0869ac 00:20:12.770 
14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.B0u 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38e60d5aead5751345e9774e56c5aebb36b702194a0869ac 2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38e60d5aead5751345e9774e56c5aebb36b702194a0869ac 2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38e60d5aead5751345e9774e56c5aebb36b702194a0869ac 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:12.770 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.B0u 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.B0u 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.B0u 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a41f98fdc0a993639e2cd420a9f94921 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GwX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a41f98fdc0a993639e2cd420a9f94921 1 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a41f98fdc0a993639e2cd420a9f94921 1 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a41f98fdc0a993639e2cd420a9f94921 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GwX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GwX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.GwX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=735ee84200d0d6a6341e9648d91c5defc21b6ad8dc7b872bb9fd68716833985c 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.339 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 735ee84200d0d6a6341e9648d91c5defc21b6ad8dc7b872bb9fd68716833985c 3 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 735ee84200d0d6a6341e9648d91c5defc21b6ad8dc7b872bb9fd68716833985c 3 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=735ee84200d0d6a6341e9648d91c5defc21b6ad8dc7b872bb9fd68716833985c 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.339 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.339 00:20:13.028 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.339 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1900087 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1900087 ']' 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
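Each gen_dhchap_key call above follows the same pattern: read the requested number of random hex characters from /dev/urandom with xxd, wrap them into a DHHC-1 secret via the inline python helper (format_dhchap_key, with the digest tag taken from the null=0 / sha256=1 / sha384=2 / sha512=3 map), and leave the result in a chmod-0600 temp file whose path is echoed back. A sketch that reuses the helper rather than re-deriving the DHHC-1 encoding, assuming test/nvmf/common.sh from this checkout sources cleanly on its own:

# Sketch: generate DHCHAP secrets the same way the auth test does.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
source "$SPDK_DIR/test/nvmf/common.sh"        # provides gen_dhchap_key / format_dhchap_key

key0=$(gen_dhchap_key null 48)     # 48 hex chars, digest tag 0 -> /tmp/spdk.key-null.XXX
key1=$(gen_dhchap_key sha256 32)   # 32 hex chars, digest tag 1
key3=$(gen_dhchap_key sha512 64)   # 64 hex chars, digest tag 3
ls -l "$key0" "$key1" "$key3"      # each file is created mode 0600 by the helper

# The harness then registers each file with both sides (as done further below), e.g.:
#   "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key0 "$key0"
#   "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock keyring_file_add_key key0 "$key0"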
00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.029 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1900238 /var/tmp/host.sock 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1900238 ']' 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.286 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u13 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.u13 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.u13 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.4qX ]] 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4qX 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4qX 00:20:14.219 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4qX 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uxX 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uxX 00:20:14.476 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uxX 00:20:14.733 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.G9Z ]] 00:20:14.733 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G9Z 00:20:14.733 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.733 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.733 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.733 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G9Z 00:20:14.733 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G9Z 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.B0u 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.B0u 00:20:14.989 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.B0u 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.GwX ]] 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GwX 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GwX 00:20:15.246 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.GwX 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.339 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.339 00:20:15.536 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.339 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:15.794 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.053 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.310 00:20:16.310 14:52:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.310 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.310 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.568 { 00:20:16.568 "cntlid": 1, 00:20:16.568 "qid": 0, 00:20:16.568 "state": "enabled", 00:20:16.568 "thread": "nvmf_tgt_poll_group_000", 00:20:16.568 "listen_address": { 00:20:16.568 "trtype": "TCP", 00:20:16.568 "adrfam": "IPv4", 00:20:16.568 "traddr": "10.0.0.2", 00:20:16.568 "trsvcid": "4420" 00:20:16.568 }, 00:20:16.568 "peer_address": { 00:20:16.568 "trtype": "TCP", 00:20:16.568 "adrfam": "IPv4", 00:20:16.568 "traddr": "10.0.0.1", 00:20:16.568 "trsvcid": "60702" 00:20:16.568 }, 00:20:16.568 "auth": { 00:20:16.568 "state": "completed", 00:20:16.568 "digest": "sha256", 00:20:16.568 "dhgroup": "null" 00:20:16.568 } 00:20:16.568 } 00:20:16.568 ]' 00:20:16.568 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.826 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.083 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.021 14:52:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.021 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.280 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.538 00:20:18.538 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.538 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.538 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.796 { 00:20:18.796 "cntlid": 3, 00:20:18.796 "qid": 0, 00:20:18.796 
"state": "enabled", 00:20:18.796 "thread": "nvmf_tgt_poll_group_000", 00:20:18.796 "listen_address": { 00:20:18.796 "trtype": "TCP", 00:20:18.796 "adrfam": "IPv4", 00:20:18.796 "traddr": "10.0.0.2", 00:20:18.796 "trsvcid": "4420" 00:20:18.796 }, 00:20:18.796 "peer_address": { 00:20:18.796 "trtype": "TCP", 00:20:18.796 "adrfam": "IPv4", 00:20:18.796 "traddr": "10.0.0.1", 00:20:18.796 "trsvcid": "60712" 00:20:18.796 }, 00:20:18.796 "auth": { 00:20:18.796 "state": "completed", 00:20:18.796 "digest": "sha256", 00:20:18.796 "dhgroup": "null" 00:20:18.796 } 00:20:18.796 } 00:20:18.796 ]' 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:18.796 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.055 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.055 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.055 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.315 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.250 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.508 14:52:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.508 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.768 00:20:20.768 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.768 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.768 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.026 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.026 { 00:20:21.026 "cntlid": 5, 00:20:21.026 "qid": 0, 00:20:21.026 "state": "enabled", 00:20:21.026 "thread": "nvmf_tgt_poll_group_000", 00:20:21.026 "listen_address": { 00:20:21.026 "trtype": "TCP", 00:20:21.026 "adrfam": "IPv4", 00:20:21.026 "traddr": "10.0.0.2", 00:20:21.026 "trsvcid": "4420" 00:20:21.026 }, 00:20:21.026 "peer_address": { 00:20:21.026 "trtype": "TCP", 00:20:21.026 "adrfam": "IPv4", 00:20:21.026 "traddr": "10.0.0.1", 00:20:21.026 "trsvcid": "60728" 00:20:21.026 }, 00:20:21.026 "auth": { 00:20:21.026 "state": "completed", 00:20:21.026 "digest": "sha256", 00:20:21.026 "dhgroup": "null" 00:20:21.026 } 00:20:21.026 } 00:20:21.026 ]' 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.027 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.285 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.661 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.229 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.229 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.229 { 00:20:23.229 "cntlid": 7, 00:20:23.230 "qid": 0, 00:20:23.230 "state": "enabled", 00:20:23.230 "thread": "nvmf_tgt_poll_group_000", 00:20:23.230 "listen_address": { 00:20:23.230 "trtype": "TCP", 00:20:23.230 "adrfam": "IPv4", 00:20:23.230 "traddr": "10.0.0.2", 00:20:23.230 "trsvcid": "4420" 00:20:23.230 }, 00:20:23.230 "peer_address": { 00:20:23.230 "trtype": "TCP", 00:20:23.230 "adrfam": "IPv4", 00:20:23.230 "traddr": "10.0.0.1", 00:20:23.230 "trsvcid": "41342" 00:20:23.230 }, 00:20:23.230 "auth": { 00:20:23.230 "state": "completed", 00:20:23.230 "digest": "sha256", 00:20:23.230 "dhgroup": "null" 00:20:23.230 } 00:20:23.230 } 00:20:23.230 ]' 00:20:23.230 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.488 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.746 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.680 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.937 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.195 00:20:25.195 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.195 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.195 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.453 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.453 { 00:20:25.453 "cntlid": 9, 00:20:25.453 "qid": 0, 00:20:25.453 "state": "enabled", 00:20:25.453 "thread": "nvmf_tgt_poll_group_000", 00:20:25.454 "listen_address": { 00:20:25.454 "trtype": "TCP", 00:20:25.454 "adrfam": "IPv4", 00:20:25.454 "traddr": "10.0.0.2", 00:20:25.454 "trsvcid": "4420" 00:20:25.454 }, 00:20:25.454 "peer_address": { 00:20:25.454 "trtype": "TCP", 00:20:25.454 "adrfam": "IPv4", 00:20:25.454 "traddr": "10.0.0.1", 00:20:25.454 "trsvcid": "41370" 00:20:25.454 }, 00:20:25.454 "auth": { 00:20:25.454 "state": "completed", 00:20:25.454 "digest": "sha256", 00:20:25.454 "dhgroup": "ffdhe2048" 00:20:25.454 } 00:20:25.454 } 00:20:25.454 ]' 00:20:25.454 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.454 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.454 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.454 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.454 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.712 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.712 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.712 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.971 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.910 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.168 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.426 00:20:27.426 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.426 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.426 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.684 { 00:20:27.684 "cntlid": 11, 00:20:27.684 "qid": 0, 00:20:27.684 "state": "enabled", 00:20:27.684 "thread": "nvmf_tgt_poll_group_000", 00:20:27.684 "listen_address": { 00:20:27.684 "trtype": "TCP", 00:20:27.684 "adrfam": "IPv4", 00:20:27.684 "traddr": "10.0.0.2", 00:20:27.684 "trsvcid": "4420" 00:20:27.684 }, 00:20:27.684 "peer_address": { 00:20:27.684 "trtype": "TCP", 00:20:27.684 "adrfam": "IPv4", 00:20:27.684 "traddr": "10.0.0.1", 00:20:27.684 "trsvcid": "41398" 00:20:27.684 }, 00:20:27.684 "auth": { 00:20:27.684 "state": "completed", 00:20:27.684 "digest": "sha256", 00:20:27.684 "dhgroup": "ffdhe2048" 00:20:27.684 } 00:20:27.684 } 00:20:27.684 ]' 00:20:27.684 
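By this point the loop has moved from the null DH group on to ffdhe2048: bdev_nvme_set_options pins the host to a single digest/dhgroup pair, nvmf_subsystem_add_host binds key1/ckey1 to the host NQN on the target, bdev_nvme_attach_controller authenticates with the same pair, and the resulting qpair's auth block is checked with jq. A condensed sketch of one such iteration is below, using only RPCs and flags that appear in the trace; it assumes key1/ckey1 were already registered via keyring_file_add_key on both RPC servers (as earlier in the log), that the target-side calls go to the default /var/tmp/spdk.sock, and it omits the nvme connect/disconnect and cleanup steps.

# Condensed sketch of one connect_authenticate iteration from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host (initiator) side: allow exactly one digest and one DH group.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): bind key1/ckey1 to this host NQN.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the same key pair.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Inspect what the qpair actually negotiated (digest, dhgroup, state).
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth'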
14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.684 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.942 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.942 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.942 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.200 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.135 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.136 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.393 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.650 00:20:29.650 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.650 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.650 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.908 { 00:20:29.908 "cntlid": 13, 00:20:29.908 "qid": 0, 00:20:29.908 "state": "enabled", 00:20:29.908 "thread": "nvmf_tgt_poll_group_000", 00:20:29.908 "listen_address": { 00:20:29.908 "trtype": "TCP", 00:20:29.908 "adrfam": "IPv4", 00:20:29.908 "traddr": "10.0.0.2", 00:20:29.908 "trsvcid": "4420" 00:20:29.908 }, 00:20:29.908 "peer_address": { 00:20:29.908 "trtype": "TCP", 00:20:29.908 "adrfam": "IPv4", 00:20:29.908 "traddr": "10.0.0.1", 00:20:29.908 "trsvcid": "41428" 00:20:29.908 }, 00:20:29.908 "auth": { 00:20:29.908 "state": "completed", 00:20:29.908 "digest": "sha256", 00:20:29.908 "dhgroup": "ffdhe2048" 00:20:29.908 } 00:20:29.908 } 00:20:29.908 ]' 00:20:29.908 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.166 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.424 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.432 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.690 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.947 00:20:31.947 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.947 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.947 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.204 { 00:20:32.204 "cntlid": 15, 00:20:32.204 "qid": 0, 00:20:32.204 "state": "enabled", 00:20:32.204 "thread": "nvmf_tgt_poll_group_000", 00:20:32.204 "listen_address": { 00:20:32.204 "trtype": "TCP", 00:20:32.204 "adrfam": "IPv4", 00:20:32.204 "traddr": "10.0.0.2", 00:20:32.204 "trsvcid": "4420" 00:20:32.204 }, 00:20:32.204 "peer_address": { 00:20:32.204 "trtype": "TCP", 00:20:32.204 "adrfam": "IPv4", 00:20:32.204 "traddr": "10.0.0.1", 00:20:32.204 "trsvcid": "39558" 00:20:32.204 }, 00:20:32.204 "auth": { 00:20:32.204 "state": "completed", 00:20:32.204 "digest": "sha256", 00:20:32.204 "dhgroup": "ffdhe2048" 00:20:32.204 } 00:20:32.204 } 00:20:32.204 ]' 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.204 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.461 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.461 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.461 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.461 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.461 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.717 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.653 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.910 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.911 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.911 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.911 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.911 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.911 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.168 00:20:34.168 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.168 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.168 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.425 { 00:20:34.425 "cntlid": 17, 00:20:34.425 "qid": 0, 00:20:34.425 "state": "enabled", 00:20:34.425 "thread": "nvmf_tgt_poll_group_000", 00:20:34.425 "listen_address": { 00:20:34.425 "trtype": "TCP", 00:20:34.425 "adrfam": "IPv4", 00:20:34.425 "traddr": 
"10.0.0.2", 00:20:34.425 "trsvcid": "4420" 00:20:34.425 }, 00:20:34.425 "peer_address": { 00:20:34.425 "trtype": "TCP", 00:20:34.425 "adrfam": "IPv4", 00:20:34.425 "traddr": "10.0.0.1", 00:20:34.425 "trsvcid": "39584" 00:20:34.425 }, 00:20:34.425 "auth": { 00:20:34.425 "state": "completed", 00:20:34.425 "digest": "sha256", 00:20:34.425 "dhgroup": "ffdhe3072" 00:20:34.425 } 00:20:34.425 } 00:20:34.425 ]' 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.425 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.683 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.683 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.683 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.683 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.057 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.057 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.316 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.575 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.831 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.832 { 00:20:36.832 "cntlid": 19, 00:20:36.832 "qid": 0, 00:20:36.832 "state": "enabled", 00:20:36.832 "thread": "nvmf_tgt_poll_group_000", 00:20:36.832 "listen_address": { 00:20:36.832 "trtype": "TCP", 00:20:36.832 "adrfam": "IPv4", 00:20:36.832 "traddr": "10.0.0.2", 00:20:36.832 "trsvcid": "4420" 00:20:36.832 }, 00:20:36.832 "peer_address": { 00:20:36.832 "trtype": "TCP", 00:20:36.832 "adrfam": "IPv4", 00:20:36.832 "traddr": "10.0.0.1", 00:20:36.832 "trsvcid": "39624" 00:20:36.832 }, 00:20:36.832 "auth": { 00:20:36.832 "state": "completed", 00:20:36.832 "digest": "sha256", 00:20:36.832 "dhgroup": "ffdhe3072" 00:20:36.832 } 00:20:36.832 } 00:20:36.832 ]' 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.832 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.832 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.832 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.832 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.088 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.069 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.326 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.327 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.327 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.327 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.894 00:20:38.894 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.894 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.894 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.894 { 00:20:38.894 "cntlid": 21, 00:20:38.894 "qid": 0, 00:20:38.894 "state": "enabled", 00:20:38.894 "thread": "nvmf_tgt_poll_group_000", 00:20:38.894 "listen_address": { 00:20:38.894 "trtype": "TCP", 00:20:38.894 "adrfam": "IPv4", 00:20:38.894 "traddr": "10.0.0.2", 00:20:38.894 "trsvcid": "4420" 00:20:38.894 }, 00:20:38.894 "peer_address": { 00:20:38.894 "trtype": "TCP", 00:20:38.894 "adrfam": "IPv4", 00:20:38.894 "traddr": "10.0.0.1", 00:20:38.894 "trsvcid": "39650" 00:20:38.894 }, 00:20:38.894 "auth": { 00:20:38.894 "state": "completed", 00:20:38.894 "digest": "sha256", 00:20:38.894 "dhgroup": "ffdhe3072" 00:20:38.894 } 00:20:38.894 } 00:20:38.894 ]' 00:20:38.894 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.152 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.410 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:20:40.346 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
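The trace above repeats the same DH-HMAC-CHAP cycle for every key index and dhgroup. Condensed into standalone commands, one iteration (here sha256/ffdhe3072 with key2) looks roughly like the sketch below. Paths, NQNs, and flags are the ones visible in the trace; $key2/$ckey2 stand in for the DHHC-1 secret strings, key2/ckey2 are assumed to name key material registered earlier in the run, and rpc.py calls without -s are assumed to reach the nvmf target's default RPC socket (the test itself wraps these in its rpc_cmd/hostrpc helpers).

# Sketch only: rpc.py path and /var/tmp/host.sock as seen in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subsys=nqn.2024-03.io.spdk:cnode0

# Host-side initiator: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN with key2; ckey2 as controller key makes the authentication bidirectional.
$rpc nvmf_subsystem_add_host $subsys $host_nqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP, authenticating with the same key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $host_nqn -n $subsys --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify on the target that the qpair finished authentication, then detach again.
$rpc nvmf_subsystem_get_qpairs $subsys | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Same exchange through the kernel initiator (nvme-cli), then tear everything down.
nvme connect -t tcp -a 10.0.0.2 -n $subsys -i 1 -q $host_nqn --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
nvme disconnect -n $subsys
$rpc nvmf_subsystem_remove_host $subsys $host_nqn

The jq checks on digest, dhgroup, and state mirror the [[ ... == ... ]] comparisons in the trace: each iteration only passes if the qpair reports the digest/dhgroup pair that was configured and an auth state of "completed".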
00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.347 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.605 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.865 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.865 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.865 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.123 00:20:41.123 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.123 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.123 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.381 { 00:20:41.381 "cntlid": 23, 00:20:41.381 "qid": 0, 00:20:41.381 "state": "enabled", 00:20:41.381 "thread": "nvmf_tgt_poll_group_000", 00:20:41.381 "listen_address": { 00:20:41.381 "trtype": "TCP", 00:20:41.381 "adrfam": "IPv4", 00:20:41.381 "traddr": "10.0.0.2", 00:20:41.381 "trsvcid": "4420" 00:20:41.381 }, 00:20:41.381 "peer_address": { 00:20:41.381 "trtype": "TCP", 00:20:41.381 "adrfam": "IPv4", 00:20:41.381 "traddr": "10.0.0.1", 00:20:41.381 "trsvcid": "39670" 00:20:41.381 }, 00:20:41.381 "auth": { 00:20:41.381 "state": "completed", 00:20:41.381 "digest": "sha256", 00:20:41.381 "dhgroup": "ffdhe3072" 00:20:41.381 } 00:20:41.381 } 00:20:41.381 ]' 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.381 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.640 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:20:42.577 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.577 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.577 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.577 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.835 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.835 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.835 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.835 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.835 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.093 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.350 00:20:43.350 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.350 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.350 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.608 { 00:20:43.608 "cntlid": 25, 00:20:43.608 "qid": 0, 00:20:43.608 "state": "enabled", 00:20:43.608 "thread": "nvmf_tgt_poll_group_000", 00:20:43.608 "listen_address": { 00:20:43.608 "trtype": "TCP", 00:20:43.608 "adrfam": "IPv4", 00:20:43.608 "traddr": "10.0.0.2", 00:20:43.608 "trsvcid": "4420" 00:20:43.608 }, 00:20:43.608 "peer_address": { 00:20:43.608 "trtype": "TCP", 00:20:43.608 "adrfam": "IPv4", 00:20:43.608 "traddr": "10.0.0.1", 00:20:43.608 "trsvcid": "52792" 00:20:43.608 }, 00:20:43.608 "auth": { 00:20:43.608 "state": "completed", 00:20:43.608 "digest": "sha256", 00:20:43.608 "dhgroup": "ffdhe4096" 00:20:43.608 } 00:20:43.608 } 00:20:43.608 ]' 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.608 14:53:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.608 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.867 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.867 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.867 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.867 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.867 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.124 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.058 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 14:53:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.316 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.885 00:20:45.885 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.885 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.885 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.885 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.885 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.885 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.885 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.144 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.144 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.144 { 00:20:46.144 "cntlid": 27, 00:20:46.144 "qid": 0, 00:20:46.144 "state": "enabled", 00:20:46.144 "thread": "nvmf_tgt_poll_group_000", 00:20:46.144 "listen_address": { 00:20:46.145 "trtype": "TCP", 00:20:46.145 "adrfam": "IPv4", 00:20:46.145 "traddr": "10.0.0.2", 00:20:46.145 "trsvcid": "4420" 00:20:46.145 }, 00:20:46.145 "peer_address": { 00:20:46.145 "trtype": "TCP", 00:20:46.145 "adrfam": "IPv4", 00:20:46.145 "traddr": "10.0.0.1", 00:20:46.145 "trsvcid": "52824" 00:20:46.145 }, 00:20:46.145 "auth": { 00:20:46.145 "state": "completed", 00:20:46.145 "digest": "sha256", 00:20:46.145 "dhgroup": "ffdhe4096" 00:20:46.145 } 00:20:46.145 } 00:20:46.145 ]' 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.145 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.403 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.337 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.596 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.226 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.226 14:53:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.226 { 00:20:48.226 "cntlid": 29, 00:20:48.226 "qid": 0, 00:20:48.226 "state": "enabled", 00:20:48.226 "thread": "nvmf_tgt_poll_group_000", 00:20:48.226 "listen_address": { 00:20:48.226 "trtype": "TCP", 00:20:48.226 "adrfam": "IPv4", 00:20:48.226 "traddr": "10.0.0.2", 00:20:48.226 "trsvcid": "4420" 00:20:48.226 }, 00:20:48.226 "peer_address": { 00:20:48.226 "trtype": "TCP", 00:20:48.226 "adrfam": "IPv4", 00:20:48.226 "traddr": "10.0.0.1", 00:20:48.226 "trsvcid": "52850" 00:20:48.226 }, 00:20:48.226 "auth": { 00:20:48.226 "state": "completed", 00:20:48.226 "digest": "sha256", 00:20:48.226 "dhgroup": "ffdhe4096" 00:20:48.226 } 00:20:48.226 } 00:20:48.226 ]' 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.226 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.483 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.483 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.483 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.483 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.483 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.741 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.675 14:53:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.675 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.932 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:49.932 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.932 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.932 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.932 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.933 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.189 00:20:50.189 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.189 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.189 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.446 { 00:20:50.446 "cntlid": 31, 00:20:50.446 "qid": 0, 00:20:50.446 "state": "enabled", 00:20:50.446 "thread": "nvmf_tgt_poll_group_000", 00:20:50.446 "listen_address": { 00:20:50.446 "trtype": "TCP", 00:20:50.446 "adrfam": "IPv4", 00:20:50.446 "traddr": "10.0.0.2", 00:20:50.446 "trsvcid": "4420" 00:20:50.446 }, 
00:20:50.446 "peer_address": { 00:20:50.446 "trtype": "TCP", 00:20:50.446 "adrfam": "IPv4", 00:20:50.446 "traddr": "10.0.0.1", 00:20:50.446 "trsvcid": "52884" 00:20:50.446 }, 00:20:50.446 "auth": { 00:20:50.446 "state": "completed", 00:20:50.446 "digest": "sha256", 00:20:50.446 "dhgroup": "ffdhe4096" 00:20:50.446 } 00:20:50.446 } 00:20:50.446 ]' 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.446 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.705 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.705 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.705 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.705 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.705 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.963 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.901 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.159 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.727 00:20:52.727 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.727 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.727 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.985 { 00:20:52.985 "cntlid": 33, 00:20:52.985 "qid": 0, 00:20:52.985 "state": "enabled", 00:20:52.985 "thread": "nvmf_tgt_poll_group_000", 00:20:52.985 "listen_address": { 00:20:52.985 "trtype": "TCP", 00:20:52.985 "adrfam": "IPv4", 00:20:52.985 "traddr": "10.0.0.2", 00:20:52.985 "trsvcid": "4420" 00:20:52.985 }, 00:20:52.985 "peer_address": { 00:20:52.985 "trtype": "TCP", 00:20:52.985 "adrfam": "IPv4", 00:20:52.985 "traddr": "10.0.0.1", 00:20:52.985 "trsvcid": "59672" 00:20:52.985 }, 00:20:52.985 "auth": { 00:20:52.985 "state": "completed", 00:20:52.985 "digest": "sha256", 00:20:52.985 "dhgroup": "ffdhe6144" 00:20:52.985 } 00:20:52.985 } 00:20:52.985 ]' 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.985 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.242 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.242 14:53:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.242 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.501 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.434 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.691 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.258 00:20:55.258 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.258 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.258 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.517 { 00:20:55.517 "cntlid": 35, 00:20:55.517 "qid": 0, 00:20:55.517 "state": "enabled", 00:20:55.517 "thread": "nvmf_tgt_poll_group_000", 00:20:55.517 "listen_address": { 00:20:55.517 "trtype": "TCP", 00:20:55.517 "adrfam": "IPv4", 00:20:55.517 "traddr": "10.0.0.2", 00:20:55.517 "trsvcid": "4420" 00:20:55.517 }, 00:20:55.517 "peer_address": { 00:20:55.517 "trtype": "TCP", 00:20:55.517 "adrfam": "IPv4", 00:20:55.517 "traddr": "10.0.0.1", 00:20:55.517 "trsvcid": "59694" 00:20:55.517 }, 00:20:55.517 "auth": { 00:20:55.517 "state": "completed", 00:20:55.517 "digest": "sha256", 00:20:55.517 "dhgroup": "ffdhe6144" 00:20:55.517 } 00:20:55.517 } 00:20:55.517 ]' 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.517 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.775 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.712 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.970 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.971 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.535 00:20:57.535 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.535 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.535 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.792 { 00:20:57.792 "cntlid": 37, 00:20:57.792 "qid": 0, 00:20:57.792 "state": "enabled", 00:20:57.792 "thread": "nvmf_tgt_poll_group_000", 00:20:57.792 "listen_address": { 00:20:57.792 "trtype": "TCP", 00:20:57.792 "adrfam": "IPv4", 00:20:57.792 "traddr": "10.0.0.2", 00:20:57.792 "trsvcid": "4420" 00:20:57.792 }, 00:20:57.792 "peer_address": { 00:20:57.792 "trtype": "TCP", 00:20:57.792 "adrfam": "IPv4", 00:20:57.792 "traddr": "10.0.0.1", 00:20:57.792 "trsvcid": "59724" 00:20:57.792 }, 00:20:57.792 "auth": { 00:20:57.792 "state": "completed", 00:20:57.792 "digest": "sha256", 00:20:57.792 "dhgroup": "ffdhe6144" 00:20:57.792 } 00:20:57.792 } 00:20:57.792 ]' 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.792 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.050 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.050 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.050 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.308 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.243 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.501 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.068 00:21:00.068 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.068 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.068 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.325 { 00:21:00.325 "cntlid": 39, 00:21:00.325 "qid": 0, 00:21:00.325 "state": "enabled", 00:21:00.325 "thread": "nvmf_tgt_poll_group_000", 00:21:00.325 "listen_address": { 00:21:00.325 "trtype": "TCP", 00:21:00.325 "adrfam": "IPv4", 00:21:00.325 "traddr": "10.0.0.2", 00:21:00.325 "trsvcid": "4420" 00:21:00.325 }, 00:21:00.325 "peer_address": { 00:21:00.325 "trtype": "TCP", 00:21:00.325 "adrfam": "IPv4", 00:21:00.325 "traddr": "10.0.0.1", 00:21:00.325 "trsvcid": "59742" 00:21:00.325 }, 00:21:00.325 "auth": { 00:21:00.325 "state": "completed", 00:21:00.325 "digest": "sha256", 00:21:00.325 "dhgroup": "ffdhe6144" 00:21:00.325 } 00:21:00.325 } 00:21:00.325 ]' 00:21:00.325 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.583 14:53:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.583 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.841 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.777 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.034 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.035 14:53:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.035 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.973 00:21:02.973 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.973 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.973 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.231 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.231 { 00:21:03.231 "cntlid": 41, 00:21:03.231 "qid": 0, 00:21:03.231 "state": "enabled", 00:21:03.231 "thread": "nvmf_tgt_poll_group_000", 00:21:03.231 "listen_address": { 00:21:03.231 "trtype": "TCP", 00:21:03.231 "adrfam": "IPv4", 00:21:03.231 "traddr": "10.0.0.2", 00:21:03.231 "trsvcid": "4420" 00:21:03.231 }, 00:21:03.231 "peer_address": { 00:21:03.231 "trtype": "TCP", 00:21:03.231 "adrfam": "IPv4", 00:21:03.231 "traddr": "10.0.0.1", 00:21:03.231 "trsvcid": "57270" 00:21:03.231 }, 00:21:03.231 "auth": { 00:21:03.231 "state": "completed", 00:21:03.231 "digest": "sha256", 00:21:03.231 "dhgroup": "ffdhe8192" 00:21:03.231 } 00:21:03.231 } 00:21:03.232 ]' 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.232 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.491 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:04.452 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.715 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.715 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.715 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.715 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.653 00:21:05.653 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.653 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.653 14:53:44 
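For reference, a condensed sketch (not part of the captured trace) of the qpair verification each iteration performs after attaching the controller. rpc_cmd and hostrpc are the test's own rpc.py wrappers seen above; $digest and $dhgroup are placeholders for the per-iteration values compared in the [[ ... == ... ]] checks throughout this trace:
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha256
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe8192
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]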
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.912 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.912 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.913 { 00:21:05.913 "cntlid": 43, 00:21:05.913 "qid": 0, 00:21:05.913 "state": "enabled", 00:21:05.913 "thread": "nvmf_tgt_poll_group_000", 00:21:05.913 "listen_address": { 00:21:05.913 "trtype": "TCP", 00:21:05.913 "adrfam": "IPv4", 00:21:05.913 "traddr": "10.0.0.2", 00:21:05.913 "trsvcid": "4420" 00:21:05.913 }, 00:21:05.913 "peer_address": { 00:21:05.913 "trtype": "TCP", 00:21:05.913 "adrfam": "IPv4", 00:21:05.913 "traddr": "10.0.0.1", 00:21:05.913 "trsvcid": "57298" 00:21:05.913 }, 00:21:05.913 "auth": { 00:21:05.913 "state": "completed", 00:21:05.913 "digest": "sha256", 00:21:05.913 "dhgroup": "ffdhe8192" 00:21:05.913 } 00:21:05.913 } 00:21:05.913 ]' 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.913 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.172 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.172 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.172 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.172 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.172 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.431 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.366 14:53:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.366 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.624 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.560 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.560 14:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.818 14:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.818 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.818 { 00:21:08.818 "cntlid": 45, 00:21:08.818 "qid": 0, 00:21:08.818 "state": "enabled", 00:21:08.818 "thread": "nvmf_tgt_poll_group_000", 00:21:08.818 "listen_address": { 00:21:08.818 "trtype": "TCP", 00:21:08.818 "adrfam": "IPv4", 00:21:08.818 "traddr": "10.0.0.2", 00:21:08.818 "trsvcid": "4420" 00:21:08.818 }, 00:21:08.818 
"peer_address": { 00:21:08.818 "trtype": "TCP", 00:21:08.818 "adrfam": "IPv4", 00:21:08.818 "traddr": "10.0.0.1", 00:21:08.818 "trsvcid": "57316" 00:21:08.819 }, 00:21:08.819 "auth": { 00:21:08.819 "state": "completed", 00:21:08.819 "digest": "sha256", 00:21:08.819 "dhgroup": "ffdhe8192" 00:21:08.819 } 00:21:08.819 } 00:21:08.819 ]' 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.819 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.076 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:10.010 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.267 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.200 00:21:11.200 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.200 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.200 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.458 { 00:21:11.458 "cntlid": 47, 00:21:11.458 "qid": 0, 00:21:11.458 "state": "enabled", 00:21:11.458 "thread": "nvmf_tgt_poll_group_000", 00:21:11.458 "listen_address": { 00:21:11.458 "trtype": "TCP", 00:21:11.458 "adrfam": "IPv4", 00:21:11.458 "traddr": "10.0.0.2", 00:21:11.458 "trsvcid": "4420" 00:21:11.458 }, 00:21:11.458 "peer_address": { 00:21:11.458 "trtype": "TCP", 00:21:11.458 "adrfam": "IPv4", 00:21:11.458 "traddr": "10.0.0.1", 00:21:11.458 "trsvcid": "57346" 00:21:11.458 }, 00:21:11.458 "auth": { 00:21:11.458 "state": "completed", 00:21:11.458 "digest": "sha256", 00:21:11.458 "dhgroup": "ffdhe8192" 00:21:11.458 } 00:21:11.458 } 00:21:11.458 ]' 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.458 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.459 14:53:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.717 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:12.652 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.911 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.480 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.480 { 00:21:13.480 "cntlid": 49, 00:21:13.480 "qid": 0, 00:21:13.480 "state": "enabled", 00:21:13.480 "thread": "nvmf_tgt_poll_group_000", 00:21:13.480 "listen_address": { 00:21:13.480 "trtype": "TCP", 00:21:13.480 "adrfam": "IPv4", 00:21:13.480 "traddr": "10.0.0.2", 00:21:13.480 "trsvcid": "4420" 00:21:13.480 }, 00:21:13.480 "peer_address": { 00:21:13.480 "trtype": "TCP", 00:21:13.480 "adrfam": "IPv4", 00:21:13.480 "traddr": "10.0.0.1", 00:21:13.480 "trsvcid": "36902" 00:21:13.480 }, 00:21:13.480 "auth": { 00:21:13.480 "state": "completed", 00:21:13.480 "digest": "sha384", 00:21:13.480 "dhgroup": "null" 00:21:13.480 } 00:21:13.480 } 00:21:13.480 ]' 00:21:13.480 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.738 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.997 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:14.933 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.190 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.446 00:21:15.446 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.446 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.446 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.703 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.703 { 00:21:15.703 "cntlid": 51, 00:21:15.703 "qid": 0, 00:21:15.703 "state": "enabled", 00:21:15.703 "thread": "nvmf_tgt_poll_group_000", 00:21:15.703 "listen_address": { 00:21:15.703 "trtype": "TCP", 00:21:15.703 "adrfam": "IPv4", 00:21:15.703 "traddr": "10.0.0.2", 00:21:15.703 "trsvcid": "4420" 00:21:15.703 }, 00:21:15.703 "peer_address": { 00:21:15.703 "trtype": "TCP", 00:21:15.703 "adrfam": "IPv4", 00:21:15.703 "traddr": "10.0.0.1", 00:21:15.703 "trsvcid": "36944" 00:21:15.703 }, 00:21:15.703 "auth": { 00:21:15.703 "state": "completed", 00:21:15.703 "digest": "sha384", 00:21:15.704 "dhgroup": "null" 00:21:15.704 } 00:21:15.704 } 00:21:15.704 ]' 00:21:15.704 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.961 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.219 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.152 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:17.409 14:53:56 
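The connect_authenticate sha384 null 2 call above follows the same pattern as every other digest/dhgroup/key combination in this run. A condensed sketch of one pass, using only the NQNs, host UUID, addresses, and keyN/ckeyN names that appear in this trace (the key material itself is registered earlier in auth.sh, outside this excerpt); $keyid and $dhchap_secret are placeholders for the values shown in the surrounding commands:
# Allow the host on the subsystem with the key under test (controller key only when a ckey exists).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
# Authenticate from the SPDK host (bdev_nvme); --dhchap-ctrlr-key "ckey$keyid" is added when
# defined. Check the qpair as in the sketch above, then detach.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid"
hostrpc bdev_nvme_detach_controller nvme0
# Repeat the handshake from the kernel initiator ($dhchap_secret stands for the DHHC-1:... strings
# shown in the trace, plus --dhchap-ctrl-secret when a controller key is in use), then clean up.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret "$dhchap_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55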
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.409 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.666 00:21:17.666 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.666 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.666 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.923 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.923 { 00:21:17.923 "cntlid": 53, 00:21:17.923 "qid": 0, 00:21:17.923 "state": "enabled", 00:21:17.923 "thread": "nvmf_tgt_poll_group_000", 00:21:17.923 "listen_address": { 00:21:17.923 "trtype": "TCP", 00:21:17.923 "adrfam": "IPv4", 00:21:17.923 "traddr": "10.0.0.2", 00:21:17.923 "trsvcid": "4420" 00:21:17.923 }, 00:21:17.923 "peer_address": { 00:21:17.923 "trtype": "TCP", 00:21:17.923 "adrfam": "IPv4", 00:21:17.923 "traddr": "10.0.0.1", 00:21:17.923 "trsvcid": "36962" 00:21:17.923 }, 00:21:17.923 "auth": { 00:21:17.923 "state": "completed", 00:21:17.923 "digest": "sha384", 00:21:17.923 "dhgroup": "null" 00:21:17.924 } 00:21:17.924 } 00:21:17.924 ]' 00:21:17.924 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.924 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:17.924 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.180 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:18.180 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.180 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.180 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.181 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.439 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.373 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.630 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.887 00:21:19.887 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.887 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.887 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.144 { 00:21:20.144 "cntlid": 55, 00:21:20.144 "qid": 0, 00:21:20.144 "state": "enabled", 00:21:20.144 "thread": "nvmf_tgt_poll_group_000", 00:21:20.144 "listen_address": { 00:21:20.144 "trtype": "TCP", 00:21:20.144 "adrfam": "IPv4", 00:21:20.144 "traddr": "10.0.0.2", 00:21:20.144 "trsvcid": "4420" 00:21:20.144 }, 00:21:20.144 "peer_address": { 00:21:20.144 "trtype": "TCP", 00:21:20.144 "adrfam": "IPv4", 00:21:20.144 "traddr": "10.0.0.1", 00:21:20.144 "trsvcid": "36986" 00:21:20.144 }, 00:21:20.144 "auth": { 00:21:20.144 "state": "completed", 00:21:20.144 "digest": "sha384", 00:21:20.144 "dhgroup": "null" 00:21:20.144 } 00:21:20.144 } 00:21:20.144 ]' 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.144 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.400 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.400 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.400 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.400 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.400 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.659 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:21.625 14:54:00 
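For orientation, the for-loops recorded at target/auth.sh@91-@94 above drive the whole sweep: every DH-HMAC-CHAP digest is combined with every DH group and every key index. A minimal sketch of that outer structure (the digests, dhgroups, and keys arrays are populated earlier in auth.sh, outside this excerpt):
for digest in "${digests[@]}"; do            # sha256, sha384, ... per this trace
    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do       # key0 .. key3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
            # followed by the kernel-initiator connect/disconnect and
            # nvmf_subsystem_remove_host calls seen throughout the trace
        done
    done
done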
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.625 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.884 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.141 00:21:22.141 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.141 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.141 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.398 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.398 { 00:21:22.399 "cntlid": 57, 00:21:22.399 "qid": 0, 00:21:22.399 "state": "enabled", 00:21:22.399 "thread": "nvmf_tgt_poll_group_000", 00:21:22.399 "listen_address": { 00:21:22.399 "trtype": "TCP", 00:21:22.399 "adrfam": "IPv4", 00:21:22.399 "traddr": "10.0.0.2", 00:21:22.399 "trsvcid": "4420" 00:21:22.399 }, 00:21:22.399 "peer_address": { 00:21:22.399 "trtype": "TCP", 00:21:22.399 "adrfam": "IPv4", 00:21:22.399 "traddr": "10.0.0.1", 00:21:22.399 "trsvcid": "32998" 00:21:22.399 }, 00:21:22.399 "auth": { 00:21:22.399 "state": "completed", 00:21:22.399 "digest": "sha384", 00:21:22.399 "dhgroup": "ffdhe2048" 00:21:22.399 } 00:21:22.399 } 00:21:22.399 ]' 00:21:22.399 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.399 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.399 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.399 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.399 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.656 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.656 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.656 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.915 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.850 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.108 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.364 00:21:24.364 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.364 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.364 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.621 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.621 { 00:21:24.621 "cntlid": 59, 00:21:24.621 "qid": 0, 00:21:24.621 "state": "enabled", 00:21:24.622 "thread": "nvmf_tgt_poll_group_000", 00:21:24.622 "listen_address": { 00:21:24.622 "trtype": "TCP", 00:21:24.622 "adrfam": "IPv4", 00:21:24.622 "traddr": "10.0.0.2", 00:21:24.622 "trsvcid": "4420" 00:21:24.622 }, 00:21:24.622 "peer_address": { 00:21:24.622 "trtype": "TCP", 00:21:24.622 "adrfam": "IPv4", 00:21:24.622 
"traddr": "10.0.0.1", 00:21:24.622 "trsvcid": "33016" 00:21:24.622 }, 00:21:24.622 "auth": { 00:21:24.622 "state": "completed", 00:21:24.622 "digest": "sha384", 00:21:24.622 "dhgroup": "ffdhe2048" 00:21:24.622 } 00:21:24.622 } 00:21:24.622 ]' 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.622 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.879 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.255 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.256 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.514 00:21:26.514 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.514 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.514 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.775 { 00:21:26.775 "cntlid": 61, 00:21:26.775 "qid": 0, 00:21:26.775 "state": "enabled", 00:21:26.775 "thread": "nvmf_tgt_poll_group_000", 00:21:26.775 "listen_address": { 00:21:26.775 "trtype": "TCP", 00:21:26.775 "adrfam": "IPv4", 00:21:26.775 "traddr": "10.0.0.2", 00:21:26.775 "trsvcid": "4420" 00:21:26.775 }, 00:21:26.775 "peer_address": { 00:21:26.775 "trtype": "TCP", 00:21:26.775 "adrfam": "IPv4", 00:21:26.775 "traddr": "10.0.0.1", 00:21:26.775 "trsvcid": "33042" 00:21:26.775 }, 00:21:26.775 "auth": { 00:21:26.775 "state": "completed", 00:21:26.775 "digest": "sha384", 00:21:26.775 "dhgroup": "ffdhe2048" 00:21:26.775 } 00:21:26.775 } 00:21:26.775 ]' 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.775 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.032 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.032 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.032 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.032 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.032 14:54:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.290 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.223 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.481 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.738 00:21:28.738 14:54:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.738 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.738 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.996 { 00:21:28.996 "cntlid": 63, 00:21:28.996 "qid": 0, 00:21:28.996 "state": "enabled", 00:21:28.996 "thread": "nvmf_tgt_poll_group_000", 00:21:28.996 "listen_address": { 00:21:28.996 "trtype": "TCP", 00:21:28.996 "adrfam": "IPv4", 00:21:28.996 "traddr": "10.0.0.2", 00:21:28.996 "trsvcid": "4420" 00:21:28.996 }, 00:21:28.996 "peer_address": { 00:21:28.996 "trtype": "TCP", 00:21:28.996 "adrfam": "IPv4", 00:21:28.996 "traddr": "10.0.0.1", 00:21:28.996 "trsvcid": "33072" 00:21:28.996 }, 00:21:28.996 "auth": { 00:21:28.996 "state": "completed", 00:21:28.996 "digest": "sha384", 00:21:28.996 "dhgroup": "ffdhe2048" 00:21:28.996 } 00:21:28.996 } 00:21:28.996 ]' 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.996 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.253 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.253 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.253 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.510 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:30.443 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:30.444 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.702 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.960 00:21:30.960 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.960 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.960 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.218 { 
00:21:31.218 "cntlid": 65, 00:21:31.218 "qid": 0, 00:21:31.218 "state": "enabled", 00:21:31.218 "thread": "nvmf_tgt_poll_group_000", 00:21:31.218 "listen_address": { 00:21:31.218 "trtype": "TCP", 00:21:31.218 "adrfam": "IPv4", 00:21:31.218 "traddr": "10.0.0.2", 00:21:31.218 "trsvcid": "4420" 00:21:31.218 }, 00:21:31.218 "peer_address": { 00:21:31.218 "trtype": "TCP", 00:21:31.218 "adrfam": "IPv4", 00:21:31.218 "traddr": "10.0.0.1", 00:21:31.218 "trsvcid": "33098" 00:21:31.218 }, 00:21:31.218 "auth": { 00:21:31.218 "state": "completed", 00:21:31.218 "digest": "sha384", 00:21:31.218 "dhgroup": "ffdhe3072" 00:21:31.218 } 00:21:31.218 } 00:21:31.218 ]' 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.218 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.476 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.476 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.476 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.736 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.667 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.926 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.184 00:21:33.184 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.184 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.184 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.442 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.442 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.442 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.443 { 00:21:33.443 "cntlid": 67, 00:21:33.443 "qid": 0, 00:21:33.443 "state": "enabled", 00:21:33.443 "thread": "nvmf_tgt_poll_group_000", 00:21:33.443 "listen_address": { 00:21:33.443 "trtype": "TCP", 00:21:33.443 "adrfam": "IPv4", 00:21:33.443 "traddr": "10.0.0.2", 00:21:33.443 "trsvcid": "4420" 00:21:33.443 }, 00:21:33.443 "peer_address": { 00:21:33.443 "trtype": "TCP", 00:21:33.443 "adrfam": "IPv4", 00:21:33.443 "traddr": "10.0.0.1", 00:21:33.443 "trsvcid": "33522" 00:21:33.443 }, 00:21:33.443 "auth": { 00:21:33.443 "state": "completed", 00:21:33.443 "digest": "sha384", 00:21:33.443 "dhgroup": "ffdhe3072" 00:21:33.443 } 00:21:33.443 } 00:21:33.443 ]' 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.443 14:54:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.443 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.703 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.703 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.703 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.703 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.640 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.207 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.208 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.208 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.208 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.466 00:21:35.466 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.466 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.466 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.725 { 00:21:35.725 "cntlid": 69, 00:21:35.725 "qid": 0, 00:21:35.725 "state": "enabled", 00:21:35.725 "thread": "nvmf_tgt_poll_group_000", 00:21:35.725 "listen_address": { 00:21:35.725 "trtype": "TCP", 00:21:35.725 "adrfam": "IPv4", 00:21:35.725 "traddr": "10.0.0.2", 00:21:35.725 "trsvcid": "4420" 00:21:35.725 }, 00:21:35.725 "peer_address": { 00:21:35.725 "trtype": "TCP", 00:21:35.725 "adrfam": "IPv4", 00:21:35.725 "traddr": "10.0.0.1", 00:21:35.725 "trsvcid": "33542" 00:21:35.725 }, 00:21:35.725 "auth": { 00:21:35.725 "state": "completed", 00:21:35.725 "digest": "sha384", 00:21:35.725 "dhgroup": "ffdhe3072" 00:21:35.725 } 00:21:35.725 } 00:21:35.725 ]' 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.725 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.983 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret 
DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:36.920 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.180 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.747 00:21:37.747 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.747 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.747 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.747 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.747 14:54:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.747 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.747 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.748 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.748 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.748 { 00:21:37.748 "cntlid": 71, 00:21:37.748 "qid": 0, 00:21:37.748 "state": "enabled", 00:21:37.748 "thread": "nvmf_tgt_poll_group_000", 00:21:37.748 "listen_address": { 00:21:37.748 "trtype": "TCP", 00:21:37.748 "adrfam": "IPv4", 00:21:37.748 "traddr": "10.0.0.2", 00:21:37.748 "trsvcid": "4420" 00:21:37.748 }, 00:21:37.748 "peer_address": { 00:21:37.748 "trtype": "TCP", 00:21:37.748 "adrfam": "IPv4", 00:21:37.748 "traddr": "10.0.0.1", 00:21:37.748 "trsvcid": "33568" 00:21:37.748 }, 00:21:37.748 "auth": { 00:21:37.748 "state": "completed", 00:21:37.748 "digest": "sha384", 00:21:37.748 "dhgroup": "ffdhe3072" 00:21:37.748 } 00:21:37.748 } 00:21:37.748 ]' 00:21:37.748 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.005 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.262 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.200 14:54:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.457 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.025 00:21:40.025 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.025 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.025 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.283 { 00:21:40.283 "cntlid": 73, 00:21:40.283 "qid": 0, 00:21:40.283 "state": "enabled", 00:21:40.283 "thread": "nvmf_tgt_poll_group_000", 00:21:40.283 "listen_address": { 00:21:40.283 "trtype": "TCP", 00:21:40.283 "adrfam": "IPv4", 00:21:40.283 "traddr": "10.0.0.2", 00:21:40.283 "trsvcid": "4420" 00:21:40.283 }, 00:21:40.283 "peer_address": { 00:21:40.283 "trtype": "TCP", 00:21:40.283 "adrfam": "IPv4", 00:21:40.283 "traddr": "10.0.0.1", 00:21:40.283 "trsvcid": "33582" 00:21:40.283 }, 00:21:40.283 "auth": { 00:21:40.283 
"state": "completed", 00:21:40.283 "digest": "sha384", 00:21:40.283 "dhgroup": "ffdhe4096" 00:21:40.283 } 00:21:40.283 } 00:21:40.283 ]' 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.283 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.541 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:41.475 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.732 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.297 00:21:42.297 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.297 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.297 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.555 { 00:21:42.555 "cntlid": 75, 00:21:42.555 "qid": 0, 00:21:42.555 "state": "enabled", 00:21:42.555 "thread": "nvmf_tgt_poll_group_000", 00:21:42.555 "listen_address": { 00:21:42.555 "trtype": "TCP", 00:21:42.555 "adrfam": "IPv4", 00:21:42.555 "traddr": "10.0.0.2", 00:21:42.555 "trsvcid": "4420" 00:21:42.555 }, 00:21:42.555 "peer_address": { 00:21:42.555 "trtype": "TCP", 00:21:42.555 "adrfam": "IPv4", 00:21:42.555 "traddr": "10.0.0.1", 00:21:42.555 "trsvcid": "33620" 00:21:42.555 }, 00:21:42.555 "auth": { 00:21:42.555 "state": "completed", 00:21:42.555 "digest": "sha384", 00:21:42.555 "dhgroup": "ffdhe4096" 00:21:42.555 } 00:21:42.555 } 00:21:42.555 ]' 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.555 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.813 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:43.749 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.749 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:43.750 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.315 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:44.572 00:21:44.572 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.572 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.572 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.829 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.829 { 00:21:44.829 "cntlid": 77, 00:21:44.829 "qid": 0, 00:21:44.829 "state": "enabled", 00:21:44.829 "thread": "nvmf_tgt_poll_group_000", 00:21:44.829 "listen_address": { 00:21:44.829 "trtype": "TCP", 00:21:44.829 "adrfam": "IPv4", 00:21:44.829 "traddr": "10.0.0.2", 00:21:44.829 "trsvcid": "4420" 00:21:44.830 }, 00:21:44.830 "peer_address": { 00:21:44.830 "trtype": "TCP", 00:21:44.830 "adrfam": "IPv4", 00:21:44.830 "traddr": "10.0.0.1", 00:21:44.830 "trsvcid": "33648" 00:21:44.830 }, 00:21:44.830 "auth": { 00:21:44.830 "state": "completed", 00:21:44.830 "digest": "sha384", 00:21:44.830 "dhgroup": "ffdhe4096" 00:21:44.830 } 00:21:44.830 } 00:21:44.830 ]' 00:21:44.830 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.830 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.088 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:46.024 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.282 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.847 00:21:46.847 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.847 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.847 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.105 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.105 { 00:21:47.105 "cntlid": 79, 00:21:47.105 "qid": 
0, 00:21:47.105 "state": "enabled", 00:21:47.106 "thread": "nvmf_tgt_poll_group_000", 00:21:47.106 "listen_address": { 00:21:47.106 "trtype": "TCP", 00:21:47.106 "adrfam": "IPv4", 00:21:47.106 "traddr": "10.0.0.2", 00:21:47.106 "trsvcid": "4420" 00:21:47.106 }, 00:21:47.106 "peer_address": { 00:21:47.106 "trtype": "TCP", 00:21:47.106 "adrfam": "IPv4", 00:21:47.106 "traddr": "10.0.0.1", 00:21:47.106 "trsvcid": "33668" 00:21:47.106 }, 00:21:47.106 "auth": { 00:21:47.106 "state": "completed", 00:21:47.106 "digest": "sha384", 00:21:47.106 "dhgroup": "ffdhe4096" 00:21:47.106 } 00:21:47.106 } 00:21:47.106 ]' 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.106 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.365 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:48.302 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.561 14:54:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.561 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.128 00:21:49.128 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.128 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.128 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.386 { 00:21:49.386 "cntlid": 81, 00:21:49.386 "qid": 0, 00:21:49.386 "state": "enabled", 00:21:49.386 "thread": "nvmf_tgt_poll_group_000", 00:21:49.386 "listen_address": { 00:21:49.386 "trtype": "TCP", 00:21:49.386 "adrfam": "IPv4", 00:21:49.386 "traddr": "10.0.0.2", 00:21:49.386 "trsvcid": "4420" 00:21:49.386 }, 00:21:49.386 "peer_address": { 00:21:49.386 "trtype": "TCP", 00:21:49.386 "adrfam": "IPv4", 00:21:49.386 "traddr": "10.0.0.1", 00:21:49.386 "trsvcid": "33686" 00:21:49.386 }, 00:21:49.386 "auth": { 00:21:49.386 "state": "completed", 00:21:49.386 "digest": "sha384", 00:21:49.386 "dhgroup": "ffdhe6144" 00:21:49.386 } 00:21:49.386 } 00:21:49.386 ]' 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.386 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.645 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.903 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:21:50.839 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.839 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.839 14:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.839 14:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.839 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.839 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.839 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:50.839 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.097 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.665 00:21:51.665 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.665 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.665 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.923 { 00:21:51.923 "cntlid": 83, 00:21:51.923 "qid": 0, 00:21:51.923 "state": "enabled", 00:21:51.923 "thread": "nvmf_tgt_poll_group_000", 00:21:51.923 "listen_address": { 00:21:51.923 "trtype": "TCP", 00:21:51.923 "adrfam": "IPv4", 00:21:51.923 "traddr": "10.0.0.2", 00:21:51.923 "trsvcid": "4420" 00:21:51.923 }, 00:21:51.923 "peer_address": { 00:21:51.923 "trtype": "TCP", 00:21:51.923 "adrfam": "IPv4", 00:21:51.923 "traddr": "10.0.0.1", 00:21:51.923 "trsvcid": "33706" 00:21:51.923 }, 00:21:51.923 "auth": { 00:21:51.923 "state": "completed", 00:21:51.923 "digest": "sha384", 00:21:51.923 "dhgroup": "ffdhe6144" 00:21:51.923 } 00:21:51.923 } 00:21:51.923 ]' 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.923 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.180 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret 
DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:21:53.554 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.555 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.161 00:21:54.161 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.161 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.161 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.418 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.418 { 00:21:54.419 "cntlid": 85, 00:21:54.419 "qid": 0, 00:21:54.419 "state": "enabled", 00:21:54.419 "thread": "nvmf_tgt_poll_group_000", 00:21:54.419 "listen_address": { 00:21:54.419 "trtype": "TCP", 00:21:54.419 "adrfam": "IPv4", 00:21:54.419 "traddr": "10.0.0.2", 00:21:54.419 "trsvcid": "4420" 00:21:54.419 }, 00:21:54.419 "peer_address": { 00:21:54.419 "trtype": "TCP", 00:21:54.419 "adrfam": "IPv4", 00:21:54.419 "traddr": "10.0.0.1", 00:21:54.419 "trsvcid": "38264" 00:21:54.419 }, 00:21:54.419 "auth": { 00:21:54.419 "state": "completed", 00:21:54.419 "digest": "sha384", 00:21:54.419 "dhgroup": "ffdhe6144" 00:21:54.419 } 00:21:54.419 } 00:21:54.419 ]' 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.419 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.677 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:21:55.607 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:21:55.867 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.176 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.742 00:21:56.742 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.742 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.742 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.742 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.742 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.742 14:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.742 14:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.743 14:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.743 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.743 { 00:21:56.743 "cntlid": 87, 00:21:56.743 "qid": 0, 00:21:56.743 "state": "enabled", 00:21:56.743 "thread": "nvmf_tgt_poll_group_000", 00:21:56.743 "listen_address": { 00:21:56.743 "trtype": "TCP", 00:21:56.743 "adrfam": "IPv4", 00:21:56.743 "traddr": "10.0.0.2", 00:21:56.743 "trsvcid": "4420" 00:21:56.743 }, 00:21:56.743 "peer_address": { 00:21:56.743 "trtype": "TCP", 00:21:56.743 "adrfam": "IPv4", 00:21:56.743 "traddr": "10.0.0.1", 00:21:56.743 "trsvcid": "38286" 00:21:56.743 }, 00:21:56.743 "auth": { 00:21:56.743 "state": "completed", 
00:21:56.743 "digest": "sha384", 00:21:56.743 "dhgroup": "ffdhe6144" 00:21:56.743 } 00:21:56.743 } 00:21:56.743 ]' 00:21:56.743 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.999 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.000 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.256 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.190 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.446 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.380 00:21:59.380 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.380 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.380 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.637 { 00:21:59.637 "cntlid": 89, 00:21:59.637 "qid": 0, 00:21:59.637 "state": "enabled", 00:21:59.637 "thread": "nvmf_tgt_poll_group_000", 00:21:59.637 "listen_address": { 00:21:59.637 "trtype": "TCP", 00:21:59.637 "adrfam": "IPv4", 00:21:59.637 "traddr": "10.0.0.2", 00:21:59.637 "trsvcid": "4420" 00:21:59.637 }, 00:21:59.637 "peer_address": { 00:21:59.637 "trtype": "TCP", 00:21:59.637 "adrfam": "IPv4", 00:21:59.637 "traddr": "10.0.0.1", 00:21:59.637 "trsvcid": "38304" 00:21:59.637 }, 00:21:59.637 "auth": { 00:21:59.637 "state": "completed", 00:21:59.637 "digest": "sha384", 00:21:59.637 "dhgroup": "ffdhe8192" 00:21:59.637 } 00:21:59.637 } 00:21:59.637 ]' 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.637 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.638 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.638 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.896 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.271 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:22:02.205 00:22:02.205 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.205 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.205 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.463 { 00:22:02.463 "cntlid": 91, 00:22:02.463 "qid": 0, 00:22:02.463 "state": "enabled", 00:22:02.463 "thread": "nvmf_tgt_poll_group_000", 00:22:02.463 "listen_address": { 00:22:02.463 "trtype": "TCP", 00:22:02.463 "adrfam": "IPv4", 00:22:02.463 "traddr": "10.0.0.2", 00:22:02.463 "trsvcid": "4420" 00:22:02.463 }, 00:22:02.463 "peer_address": { 00:22:02.463 "trtype": "TCP", 00:22:02.463 "adrfam": "IPv4", 00:22:02.463 "traddr": "10.0.0.1", 00:22:02.463 "trsvcid": "38330" 00:22:02.463 }, 00:22:02.463 "auth": { 00:22:02.463 "state": "completed", 00:22:02.463 "digest": "sha384", 00:22:02.463 "dhgroup": "ffdhe8192" 00:22:02.463 } 00:22:02.463 } 00:22:02.463 ]' 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.463 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.720 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:03.655 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.655 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.655 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:03.655 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.913 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.913 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.913 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.913 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.171 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.106 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.106 { 
00:22:05.106 "cntlid": 93, 00:22:05.106 "qid": 0, 00:22:05.106 "state": "enabled", 00:22:05.106 "thread": "nvmf_tgt_poll_group_000", 00:22:05.106 "listen_address": { 00:22:05.106 "trtype": "TCP", 00:22:05.106 "adrfam": "IPv4", 00:22:05.106 "traddr": "10.0.0.2", 00:22:05.106 "trsvcid": "4420" 00:22:05.106 }, 00:22:05.106 "peer_address": { 00:22:05.106 "trtype": "TCP", 00:22:05.106 "adrfam": "IPv4", 00:22:05.106 "traddr": "10.0.0.1", 00:22:05.106 "trsvcid": "38134" 00:22:05.106 }, 00:22:05.106 "auth": { 00:22:05.106 "state": "completed", 00:22:05.106 "digest": "sha384", 00:22:05.106 "dhgroup": "ffdhe8192" 00:22:05.106 } 00:22:05.106 } 00:22:05.106 ]' 00:22:05.106 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.364 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.621 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:06.553 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:06.810 14:54:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.810 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.745 00:22:07.745 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.745 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.745 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.002 { 00:22:08.002 "cntlid": 95, 00:22:08.002 "qid": 0, 00:22:08.002 "state": "enabled", 00:22:08.002 "thread": "nvmf_tgt_poll_group_000", 00:22:08.002 "listen_address": { 00:22:08.002 "trtype": "TCP", 00:22:08.002 "adrfam": "IPv4", 00:22:08.002 "traddr": "10.0.0.2", 00:22:08.002 "trsvcid": "4420" 00:22:08.002 }, 00:22:08.002 "peer_address": { 00:22:08.002 "trtype": "TCP", 00:22:08.002 "adrfam": "IPv4", 00:22:08.002 "traddr": "10.0.0.1", 00:22:08.002 "trsvcid": "38164" 00:22:08.002 }, 00:22:08.002 "auth": { 00:22:08.002 "state": "completed", 00:22:08.002 "digest": "sha384", 00:22:08.002 "dhgroup": "ffdhe8192" 00:22:08.002 } 00:22:08.002 } 00:22:08.002 ]' 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.002 14:54:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.002 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.260 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:09.198 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.457 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.716 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.973 00:22:09.973 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.973 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.973 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.231 { 00:22:10.231 "cntlid": 97, 00:22:10.231 "qid": 0, 00:22:10.231 "state": "enabled", 00:22:10.231 "thread": "nvmf_tgt_poll_group_000", 00:22:10.231 "listen_address": { 00:22:10.231 "trtype": "TCP", 00:22:10.231 "adrfam": "IPv4", 00:22:10.231 "traddr": "10.0.0.2", 00:22:10.231 "trsvcid": "4420" 00:22:10.231 }, 00:22:10.231 "peer_address": { 00:22:10.231 "trtype": "TCP", 00:22:10.231 "adrfam": "IPv4", 00:22:10.231 "traddr": "10.0.0.1", 00:22:10.231 "trsvcid": "38184" 00:22:10.231 }, 00:22:10.231 "auth": { 00:22:10.231 "state": "completed", 00:22:10.231 "digest": "sha512", 00:22:10.231 "dhgroup": "null" 00:22:10.231 } 00:22:10.231 } 00:22:10.231 ]' 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.231 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.488 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.452 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.710 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.970 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.230 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.488 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.488 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.488 { 00:22:12.488 "cntlid": 99, 00:22:12.488 "qid": 0, 00:22:12.488 "state": "enabled", 00:22:12.488 "thread": "nvmf_tgt_poll_group_000", 00:22:12.488 "listen_address": { 00:22:12.488 "trtype": "TCP", 00:22:12.488 "adrfam": "IPv4", 00:22:12.488 "traddr": "10.0.0.2", 00:22:12.488 "trsvcid": "4420" 00:22:12.488 }, 00:22:12.488 "peer_address": { 00:22:12.488 "trtype": "TCP", 00:22:12.488 "adrfam": "IPv4", 00:22:12.488 "traddr": "10.0.0.1", 00:22:12.488 "trsvcid": "59110" 00:22:12.488 }, 00:22:12.488 "auth": { 00:22:12.488 "state": "completed", 00:22:12.488 "digest": "sha512", 00:22:12.488 "dhgroup": "null" 00:22:12.488 } 00:22:12.488 } 00:22:12.488 ]' 00:22:12.488 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.489 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.747 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.680 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:13.680 14:54:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.939 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.196 00:22:14.196 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.196 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.196 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.467 { 00:22:14.467 "cntlid": 101, 00:22:14.467 "qid": 0, 00:22:14.467 "state": "enabled", 00:22:14.467 "thread": "nvmf_tgt_poll_group_000", 00:22:14.467 "listen_address": { 00:22:14.467 "trtype": "TCP", 00:22:14.467 "adrfam": "IPv4", 00:22:14.467 "traddr": "10.0.0.2", 00:22:14.467 "trsvcid": "4420" 00:22:14.467 }, 00:22:14.467 "peer_address": { 00:22:14.467 "trtype": "TCP", 00:22:14.467 "adrfam": "IPv4", 00:22:14.467 "traddr": "10.0.0.1", 00:22:14.467 "trsvcid": "59140" 00:22:14.467 }, 00:22:14.467 "auth": 
{ 00:22:14.467 "state": "completed", 00:22:14.467 "digest": "sha512", 00:22:14.467 "dhgroup": "null" 00:22:14.467 } 00:22:14.467 } 00:22:14.467 ]' 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.467 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.724 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.724 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.724 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.724 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.724 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.982 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:15.917 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.175 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.433 00:22:16.433 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.433 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.433 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.691 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.691 { 00:22:16.691 "cntlid": 103, 00:22:16.691 "qid": 0, 00:22:16.691 "state": "enabled", 00:22:16.691 "thread": "nvmf_tgt_poll_group_000", 00:22:16.691 "listen_address": { 00:22:16.691 "trtype": "TCP", 00:22:16.691 "adrfam": "IPv4", 00:22:16.691 "traddr": "10.0.0.2", 00:22:16.691 "trsvcid": "4420" 00:22:16.691 }, 00:22:16.691 "peer_address": { 00:22:16.691 "trtype": "TCP", 00:22:16.691 "adrfam": "IPv4", 00:22:16.691 "traddr": "10.0.0.1", 00:22:16.691 "trsvcid": "59152" 00:22:16.691 }, 00:22:16.691 "auth": { 00:22:16.691 "state": "completed", 00:22:16.691 "digest": "sha512", 00:22:16.691 "dhgroup": "null" 00:22:16.691 } 00:22:16.691 } 00:22:16.691 ]' 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.692 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.951 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:17.887 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:17.888 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.146 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.713 00:22:18.713 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.713 14:54:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.713 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.713 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.713 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.713 14:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.713 14:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.971 { 00:22:18.971 "cntlid": 105, 00:22:18.971 "qid": 0, 00:22:18.971 "state": "enabled", 00:22:18.971 "thread": "nvmf_tgt_poll_group_000", 00:22:18.971 "listen_address": { 00:22:18.971 "trtype": "TCP", 00:22:18.971 "adrfam": "IPv4", 00:22:18.971 "traddr": "10.0.0.2", 00:22:18.971 "trsvcid": "4420" 00:22:18.971 }, 00:22:18.971 "peer_address": { 00:22:18.971 "trtype": "TCP", 00:22:18.971 "adrfam": "IPv4", 00:22:18.971 "traddr": "10.0.0.1", 00:22:18.971 "trsvcid": "59188" 00:22:18.971 }, 00:22:18.971 "auth": { 00:22:18.971 "state": "completed", 00:22:18.971 "digest": "sha512", 00:22:18.971 "dhgroup": "ffdhe2048" 00:22:18.971 } 00:22:18.971 } 00:22:18.971 ]' 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.971 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.229 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:20.164 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.164 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.164 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.164 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:20.165 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.165 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.165 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.165 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.422 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.679 00:22:20.679 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.679 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.679 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.937 { 00:22:20.937 "cntlid": 107, 00:22:20.937 "qid": 0, 00:22:20.937 "state": "enabled", 00:22:20.937 "thread": 
"nvmf_tgt_poll_group_000", 00:22:20.937 "listen_address": { 00:22:20.937 "trtype": "TCP", 00:22:20.937 "adrfam": "IPv4", 00:22:20.937 "traddr": "10.0.0.2", 00:22:20.937 "trsvcid": "4420" 00:22:20.937 }, 00:22:20.937 "peer_address": { 00:22:20.937 "trtype": "TCP", 00:22:20.937 "adrfam": "IPv4", 00:22:20.937 "traddr": "10.0.0.1", 00:22:20.937 "trsvcid": "59214" 00:22:20.937 }, 00:22:20.937 "auth": { 00:22:20.937 "state": "completed", 00:22:20.937 "digest": "sha512", 00:22:20.937 "dhgroup": "ffdhe2048" 00:22:20.937 } 00:22:20.937 } 00:22:20.937 ]' 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.937 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.195 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.195 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.195 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.195 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.195 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.452 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.388 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.646 14:55:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.646 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.904 00:22:22.904 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.904 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.904 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.162 { 00:22:23.162 "cntlid": 109, 00:22:23.162 "qid": 0, 00:22:23.162 "state": "enabled", 00:22:23.162 "thread": "nvmf_tgt_poll_group_000", 00:22:23.162 "listen_address": { 00:22:23.162 "trtype": "TCP", 00:22:23.162 "adrfam": "IPv4", 00:22:23.162 "traddr": "10.0.0.2", 00:22:23.162 "trsvcid": "4420" 00:22:23.162 }, 00:22:23.162 "peer_address": { 00:22:23.162 "trtype": "TCP", 00:22:23.162 "adrfam": "IPv4", 00:22:23.162 "traddr": "10.0.0.1", 00:22:23.162 "trsvcid": "57838" 00:22:23.162 }, 00:22:23.162 "auth": { 00:22:23.162 "state": "completed", 00:22:23.162 "digest": "sha512", 00:22:23.162 "dhgroup": "ffdhe2048" 00:22:23.162 } 00:22:23.162 } 00:22:23.162 ]' 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.162 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.420 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.420 14:55:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.420 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.420 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.420 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.677 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.609 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.867 14:55:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.124 00:22:25.124 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.124 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.124 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.382 { 00:22:25.382 "cntlid": 111, 00:22:25.382 "qid": 0, 00:22:25.382 "state": "enabled", 00:22:25.382 "thread": "nvmf_tgt_poll_group_000", 00:22:25.382 "listen_address": { 00:22:25.382 "trtype": "TCP", 00:22:25.382 "adrfam": "IPv4", 00:22:25.382 "traddr": "10.0.0.2", 00:22:25.382 "trsvcid": "4420" 00:22:25.382 }, 00:22:25.382 "peer_address": { 00:22:25.382 "trtype": "TCP", 00:22:25.382 "adrfam": "IPv4", 00:22:25.382 "traddr": "10.0.0.1", 00:22:25.382 "trsvcid": "57860" 00:22:25.382 }, 00:22:25.382 "auth": { 00:22:25.382 "state": "completed", 00:22:25.382 "digest": "sha512", 00:22:25.382 "dhgroup": "ffdhe2048" 00:22:25.382 } 00:22:25.382 } 00:22:25.382 ]' 00:22:25.382 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.640 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.898 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:26.834 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:26.835 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.125 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.383 00:22:27.383 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.383 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.383 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.641 { 00:22:27.641 "cntlid": 113, 00:22:27.641 "qid": 0, 00:22:27.641 "state": "enabled", 00:22:27.641 "thread": "nvmf_tgt_poll_group_000", 00:22:27.641 "listen_address": { 00:22:27.641 "trtype": "TCP", 00:22:27.641 "adrfam": "IPv4", 00:22:27.641 "traddr": "10.0.0.2", 00:22:27.641 "trsvcid": "4420" 00:22:27.641 }, 00:22:27.641 "peer_address": { 00:22:27.641 "trtype": "TCP", 00:22:27.641 "adrfam": "IPv4", 00:22:27.641 "traddr": "10.0.0.1", 00:22:27.641 "trsvcid": "57892" 00:22:27.641 }, 00:22:27.641 "auth": { 00:22:27.641 "state": "completed", 00:22:27.641 "digest": "sha512", 00:22:27.641 "dhgroup": "ffdhe3072" 00:22:27.641 } 00:22:27.641 } 00:22:27.641 ]' 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.641 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.898 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:27.899 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.899 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.899 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.899 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.156 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.092 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.350 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.607 00:22:29.607 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.607 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.607 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.864 { 00:22:29.864 "cntlid": 115, 00:22:29.864 "qid": 0, 00:22:29.864 "state": "enabled", 00:22:29.864 "thread": "nvmf_tgt_poll_group_000", 00:22:29.864 "listen_address": { 00:22:29.864 "trtype": "TCP", 00:22:29.864 "adrfam": "IPv4", 00:22:29.864 "traddr": "10.0.0.2", 00:22:29.864 "trsvcid": "4420" 00:22:29.864 }, 00:22:29.864 "peer_address": { 00:22:29.864 "trtype": "TCP", 00:22:29.864 "adrfam": "IPv4", 00:22:29.864 "traddr": "10.0.0.1", 00:22:29.864 "trsvcid": "57914" 00:22:29.864 }, 00:22:29.864 "auth": { 00:22:29.864 "state": "completed", 00:22:29.864 "digest": "sha512", 00:22:29.864 "dhgroup": "ffdhe3072" 00:22:29.864 } 00:22:29.864 } 
00:22:29.864 ]' 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.864 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.122 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.122 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.122 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.122 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.122 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.379 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.311 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.568 14:55:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.568 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.825 00:22:31.826 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.826 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.826 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.083 { 00:22:32.083 "cntlid": 117, 00:22:32.083 "qid": 0, 00:22:32.083 "state": "enabled", 00:22:32.083 "thread": "nvmf_tgt_poll_group_000", 00:22:32.083 "listen_address": { 00:22:32.083 "trtype": "TCP", 00:22:32.083 "adrfam": "IPv4", 00:22:32.083 "traddr": "10.0.0.2", 00:22:32.083 "trsvcid": "4420" 00:22:32.083 }, 00:22:32.083 "peer_address": { 00:22:32.083 "trtype": "TCP", 00:22:32.083 "adrfam": "IPv4", 00:22:32.083 "traddr": "10.0.0.1", 00:22:32.083 "trsvcid": "57940" 00:22:32.083 }, 00:22:32.083 "auth": { 00:22:32.083 "state": "completed", 00:22:32.083 "digest": "sha512", 00:22:32.083 "dhgroup": "ffdhe3072" 00:22:32.083 } 00:22:32.083 } 00:22:32.083 ]' 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.083 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.341 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.341 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.341 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.341 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.341 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.598 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.530 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.788 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.053 00:22:34.053 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.053 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.053 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.317 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.317 { 00:22:34.317 "cntlid": 119, 00:22:34.317 "qid": 0, 00:22:34.318 "state": "enabled", 00:22:34.318 "thread": "nvmf_tgt_poll_group_000", 00:22:34.318 "listen_address": { 00:22:34.318 "trtype": "TCP", 00:22:34.318 "adrfam": "IPv4", 00:22:34.318 "traddr": "10.0.0.2", 00:22:34.318 "trsvcid": "4420" 00:22:34.318 }, 00:22:34.318 "peer_address": { 00:22:34.318 "trtype": "TCP", 00:22:34.318 "adrfam": "IPv4", 00:22:34.318 "traddr": "10.0.0.1", 00:22:34.318 "trsvcid": "39364" 00:22:34.318 }, 00:22:34.318 "auth": { 00:22:34.318 "state": "completed", 00:22:34.318 "digest": "sha512", 00:22:34.318 "dhgroup": "ffdhe3072" 00:22:34.318 } 00:22:34.318 } 00:22:34.318 ]' 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.318 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.575 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:35.508 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.508 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.508 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.508 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.767 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.767 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.767 14:55:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.767 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.767 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.767 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.027 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.027 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.027 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.286 00:22:36.286 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.286 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.286 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.544 { 00:22:36.544 "cntlid": 121, 00:22:36.544 "qid": 0, 00:22:36.544 "state": "enabled", 00:22:36.544 "thread": "nvmf_tgt_poll_group_000", 00:22:36.544 "listen_address": { 00:22:36.544 "trtype": "TCP", 00:22:36.544 "adrfam": "IPv4", 
00:22:36.544 "traddr": "10.0.0.2", 00:22:36.544 "trsvcid": "4420" 00:22:36.544 }, 00:22:36.544 "peer_address": { 00:22:36.544 "trtype": "TCP", 00:22:36.544 "adrfam": "IPv4", 00:22:36.544 "traddr": "10.0.0.1", 00:22:36.544 "trsvcid": "39390" 00:22:36.544 }, 00:22:36.544 "auth": { 00:22:36.544 "state": "completed", 00:22:36.544 "digest": "sha512", 00:22:36.544 "dhgroup": "ffdhe4096" 00:22:36.544 } 00:22:36.544 } 00:22:36.544 ]' 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.544 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.803 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:37.739 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.739 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.739 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.739 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:37.996 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:37.997 14:55:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.997 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.255 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.255 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.255 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.513 00:22:38.513 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.513 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.513 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.771 { 00:22:38.771 "cntlid": 123, 00:22:38.771 "qid": 0, 00:22:38.771 "state": "enabled", 00:22:38.771 "thread": "nvmf_tgt_poll_group_000", 00:22:38.771 "listen_address": { 00:22:38.771 "trtype": "TCP", 00:22:38.771 "adrfam": "IPv4", 00:22:38.771 "traddr": "10.0.0.2", 00:22:38.771 "trsvcid": "4420" 00:22:38.771 }, 00:22:38.771 "peer_address": { 00:22:38.771 "trtype": "TCP", 00:22:38.771 "adrfam": "IPv4", 00:22:38.771 "traddr": "10.0.0.1", 00:22:38.771 "trsvcid": "39434" 00:22:38.771 }, 00:22:38.771 "auth": { 00:22:38.771 "state": "completed", 00:22:38.771 "digest": "sha512", 00:22:38.771 "dhgroup": "ffdhe4096" 00:22:38.771 } 00:22:38.771 } 00:22:38.771 ]' 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.771 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.771 14:55:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.029 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.962 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.529 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.787 00:22:40.787 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.787 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.787 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.043 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.043 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.044 { 00:22:41.044 "cntlid": 125, 00:22:41.044 "qid": 0, 00:22:41.044 "state": "enabled", 00:22:41.044 "thread": "nvmf_tgt_poll_group_000", 00:22:41.044 "listen_address": { 00:22:41.044 "trtype": "TCP", 00:22:41.044 "adrfam": "IPv4", 00:22:41.044 "traddr": "10.0.0.2", 00:22:41.044 "trsvcid": "4420" 00:22:41.044 }, 00:22:41.044 "peer_address": { 00:22:41.044 "trtype": "TCP", 00:22:41.044 "adrfam": "IPv4", 00:22:41.044 "traddr": "10.0.0.1", 00:22:41.044 "trsvcid": "39458" 00:22:41.044 }, 00:22:41.044 "auth": { 00:22:41.044 "state": "completed", 00:22:41.044 "digest": "sha512", 00:22:41.044 "dhgroup": "ffdhe4096" 00:22:41.044 } 00:22:41.044 } 00:22:41.044 ]' 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.044 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.301 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.301 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.301 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.559 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
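Each iteration recorded in this log is one DH-HMAC-CHAP authentication round for a given digest/dhgroup/key combination, driven by target/auth.sh. For orientation, a minimal sketch of a single round follows; it uses only the RPCs and nvme-cli calls that appear in the records above, but $SUBNQN, $HOSTNQN, the $HOSTRPC/$TGTRPC helpers and the DHHC-1 secrets are illustrative placeholders, not the literal values or helper names from the test script.

  # Sketch of one connect_authenticate round (placeholder values, not the script itself)
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev_nvme RPC socket, as in this log
  TGTRPC="scripts/rpc.py"                          # target-side RPCs (rpc_cmd in the script), default socket assumed

  # Limit the host initiator to one digest/dhgroup pair for this round.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Register the host on the subsystem with the DH-CHAP key (controller key optional).
  $TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach a controller from the host side; this triggers in-band authentication.
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0

  # Check the qpair on the target, then detach the host controller.
  $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
  $HOSTRPC bdev_nvme_detach_controller nvme0

  # Repeat the handshake through nvme-cli with the DHHC-1 secrets, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
  nvme disconnect -n "$SUBNQN"
  $TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The qpair listing from nvmf_subsystem_get_qpairs is what the records above inspect with jq: auth.state must be "completed" and the reported digest and dhgroup must match the pair just configured before the connection is torn down and the next key or dhgroup is tried.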
00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.496 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.755 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.018 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.331 { 00:22:43.331 "cntlid": 127, 00:22:43.331 "qid": 0, 00:22:43.331 "state": "enabled", 00:22:43.331 "thread": "nvmf_tgt_poll_group_000", 00:22:43.331 "listen_address": { 00:22:43.331 "trtype": "TCP", 00:22:43.331 "adrfam": "IPv4", 00:22:43.331 "traddr": "10.0.0.2", 00:22:43.331 "trsvcid": "4420" 00:22:43.331 }, 00:22:43.331 "peer_address": { 00:22:43.331 "trtype": "TCP", 00:22:43.331 "adrfam": "IPv4", 00:22:43.331 "traddr": "10.0.0.1", 00:22:43.331 "trsvcid": "58212" 00:22:43.331 }, 00:22:43.331 "auth": { 00:22:43.331 "state": "completed", 00:22:43.331 "digest": "sha512", 00:22:43.331 "dhgroup": "ffdhe4096" 00:22:43.331 } 00:22:43.331 } 00:22:43.331 ]' 00:22:43.331 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.589 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.847 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.783 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.041 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.610 00:22:45.610 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.610 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.610 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.869 { 00:22:45.869 "cntlid": 129, 00:22:45.869 "qid": 0, 00:22:45.869 "state": "enabled", 00:22:45.869 "thread": "nvmf_tgt_poll_group_000", 00:22:45.869 "listen_address": { 00:22:45.869 "trtype": "TCP", 00:22:45.869 "adrfam": "IPv4", 00:22:45.869 "traddr": "10.0.0.2", 00:22:45.869 "trsvcid": "4420" 00:22:45.869 }, 00:22:45.869 "peer_address": { 00:22:45.869 "trtype": "TCP", 00:22:45.869 "adrfam": "IPv4", 00:22:45.869 "traddr": "10.0.0.1", 00:22:45.869 "trsvcid": "58246" 00:22:45.869 }, 00:22:45.869 "auth": { 00:22:45.869 "state": "completed", 00:22:45.869 "digest": "sha512", 00:22:45.869 "dhgroup": "ffdhe6144" 00:22:45.869 } 00:22:45.869 } 00:22:45.869 ]' 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.869 14:55:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.869 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.129 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.067 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.325 14:55:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.325 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.893 00:22:47.893 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.893 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.893 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.151 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.151 { 00:22:48.151 "cntlid": 131, 00:22:48.151 "qid": 0, 00:22:48.151 "state": "enabled", 00:22:48.151 "thread": "nvmf_tgt_poll_group_000", 00:22:48.151 "listen_address": { 00:22:48.151 "trtype": "TCP", 00:22:48.151 "adrfam": "IPv4", 00:22:48.151 "traddr": "10.0.0.2", 00:22:48.151 "trsvcid": "4420" 00:22:48.151 }, 00:22:48.151 "peer_address": { 00:22:48.151 "trtype": "TCP", 00:22:48.152 "adrfam": "IPv4", 00:22:48.152 "traddr": "10.0.0.1", 00:22:48.152 "trsvcid": "58264" 00:22:48.152 }, 00:22:48.152 "auth": { 00:22:48.152 "state": "completed", 00:22:48.152 "digest": "sha512", 00:22:48.152 "dhgroup": "ffdhe6144" 00:22:48.152 } 00:22:48.152 } 00:22:48.152 ]' 00:22:48.152 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.409 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.667 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.602 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.861 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.429 00:22:50.429 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.429 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.429 14:55:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.686 { 00:22:50.686 "cntlid": 133, 00:22:50.686 "qid": 0, 00:22:50.686 "state": "enabled", 00:22:50.686 "thread": "nvmf_tgt_poll_group_000", 00:22:50.686 "listen_address": { 00:22:50.686 "trtype": "TCP", 00:22:50.686 "adrfam": "IPv4", 00:22:50.686 "traddr": "10.0.0.2", 00:22:50.686 "trsvcid": "4420" 00:22:50.686 }, 00:22:50.686 "peer_address": { 00:22:50.686 "trtype": "TCP", 00:22:50.686 "adrfam": "IPv4", 00:22:50.686 "traddr": "10.0.0.1", 00:22:50.686 "trsvcid": "58294" 00:22:50.686 }, 00:22:50.686 "auth": { 00:22:50.686 "state": "completed", 00:22:50.686 "digest": "sha512", 00:22:50.686 "dhgroup": "ffdhe6144" 00:22:50.686 } 00:22:50.686 } 00:22:50.686 ]' 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.686 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.945 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.320 14:55:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.320 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.886 00:22:52.886 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.886 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.886 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.144 { 00:22:53.144 "cntlid": 135, 00:22:53.144 "qid": 0, 00:22:53.144 "state": "enabled", 00:22:53.144 "thread": "nvmf_tgt_poll_group_000", 00:22:53.144 "listen_address": { 00:22:53.144 "trtype": "TCP", 00:22:53.144 "adrfam": "IPv4", 00:22:53.144 "traddr": "10.0.0.2", 00:22:53.144 "trsvcid": "4420" 00:22:53.144 }, 
00:22:53.144 "peer_address": { 00:22:53.144 "trtype": "TCP", 00:22:53.144 "adrfam": "IPv4", 00:22:53.144 "traddr": "10.0.0.1", 00:22:53.144 "trsvcid": "49848" 00:22:53.144 }, 00:22:53.144 "auth": { 00:22:53.144 "state": "completed", 00:22:53.144 "digest": "sha512", 00:22:53.144 "dhgroup": "ffdhe6144" 00:22:53.144 } 00:22:53.144 } 00:22:53.144 ]' 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.144 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.402 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.402 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.402 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.402 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:22:54.337 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.595 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.853 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.788 00:22:55.788 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.788 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.788 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.046 { 00:22:56.046 "cntlid": 137, 00:22:56.046 "qid": 0, 00:22:56.046 "state": "enabled", 00:22:56.046 "thread": "nvmf_tgt_poll_group_000", 00:22:56.046 "listen_address": { 00:22:56.046 "trtype": "TCP", 00:22:56.046 "adrfam": "IPv4", 00:22:56.046 "traddr": "10.0.0.2", 00:22:56.046 "trsvcid": "4420" 00:22:56.046 }, 00:22:56.046 "peer_address": { 00:22:56.046 "trtype": "TCP", 00:22:56.046 "adrfam": "IPv4", 00:22:56.046 "traddr": "10.0.0.1", 00:22:56.046 "trsvcid": "49878" 00:22:56.046 }, 00:22:56.046 "auth": { 00:22:56.046 "state": "completed", 00:22:56.046 "digest": "sha512", 00:22:56.046 "dhgroup": "ffdhe8192" 00:22:56.046 } 00:22:56.046 } 00:22:56.046 ]' 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.046 14:55:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.046 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.304 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.241 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.499 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.500 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.500 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.500 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.500 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.500 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.434 00:22:58.434 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.434 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.434 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.692 { 00:22:58.692 "cntlid": 139, 00:22:58.692 "qid": 0, 00:22:58.692 "state": "enabled", 00:22:58.692 "thread": "nvmf_tgt_poll_group_000", 00:22:58.692 "listen_address": { 00:22:58.692 "trtype": "TCP", 00:22:58.692 "adrfam": "IPv4", 00:22:58.692 "traddr": "10.0.0.2", 00:22:58.692 "trsvcid": "4420" 00:22:58.692 }, 00:22:58.692 "peer_address": { 00:22:58.692 "trtype": "TCP", 00:22:58.692 "adrfam": "IPv4", 00:22:58.692 "traddr": "10.0.0.1", 00:22:58.692 "trsvcid": "49898" 00:22:58.692 }, 00:22:58.692 "auth": { 00:22:58.692 "state": "completed", 00:22:58.692 "digest": "sha512", 00:22:58.692 "dhgroup": "ffdhe8192" 00:22:58.692 } 00:22:58.692 } 00:22:58.692 ]' 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.692 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.950 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmI5YTYyNjM3NzBkYzA4NTBmODc1YmUyMWM0NzdhMWVi24Rl: --dhchap-ctrl-secret DHHC-1:02:YTk1ZmE0ZGJkNmUxNzQzNDc3NzAxZjFiZmViZDg5ZGY3ODNmNWE0Y2I2Nzc2NjI3W8ciIA==: 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.886 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.190 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.130 00:23:01.130 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.130 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.130 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.388 { 00:23:01.388 "cntlid": 141, 00:23:01.388 "qid": 0, 00:23:01.388 "state": "enabled", 00:23:01.388 "thread": "nvmf_tgt_poll_group_000", 00:23:01.388 "listen_address": { 00:23:01.388 "trtype": "TCP", 00:23:01.388 "adrfam": "IPv4", 00:23:01.388 "traddr": "10.0.0.2", 00:23:01.388 "trsvcid": "4420" 00:23:01.388 }, 00:23:01.388 "peer_address": { 00:23:01.388 "trtype": "TCP", 00:23:01.388 "adrfam": "IPv4", 00:23:01.388 "traddr": "10.0.0.1", 00:23:01.388 "trsvcid": "49918" 00:23:01.388 }, 00:23:01.388 "auth": { 00:23:01.388 "state": "completed", 00:23:01.388 "digest": "sha512", 00:23:01.388 "dhgroup": "ffdhe8192" 00:23:01.388 } 00:23:01.388 } 00:23:01.388 ]' 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.388 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.647 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzhlNjBkNWFlYWQ1NzUxMzQ1ZTk3NzRlNTZjNWFlYmIzNmI3MDIxOTRhMDg2OWFj9fRvyA==: --dhchap-ctrl-secret DHHC-1:01:YTQxZjk4ZmRjMGE5OTM2MzllMmNkNDIwYTlmOTQ5MjH+Wu2x: 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:02.584 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:03.150 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:03.151 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:04.086 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.086 { 00:23:04.086 "cntlid": 143, 00:23:04.086 "qid": 0, 00:23:04.086 "state": "enabled", 00:23:04.086 "thread": "nvmf_tgt_poll_group_000", 00:23:04.086 "listen_address": { 00:23:04.086 "trtype": "TCP", 00:23:04.086 "adrfam": "IPv4", 00:23:04.086 "traddr": "10.0.0.2", 00:23:04.086 "trsvcid": "4420" 00:23:04.086 }, 00:23:04.086 "peer_address": { 00:23:04.086 "trtype": "TCP", 00:23:04.086 "adrfam": "IPv4", 00:23:04.086 "traddr": "10.0.0.1", 00:23:04.086 "trsvcid": "39580" 00:23:04.086 }, 00:23:04.086 "auth": { 00:23:04.086 "state": "completed", 00:23:04.086 "digest": "sha512", 00:23:04.086 "dhgroup": "ffdhe8192" 00:23:04.086 } 00:23:04.086 } 00:23:04.086 ]' 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.086 
14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.086 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.343 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.343 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.343 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.600 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.536 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.794 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.734 00:23:06.734 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.734 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.734 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.992 { 00:23:06.992 "cntlid": 145, 00:23:06.992 "qid": 0, 00:23:06.992 "state": "enabled", 00:23:06.992 "thread": "nvmf_tgt_poll_group_000", 00:23:06.992 "listen_address": { 00:23:06.992 "trtype": "TCP", 00:23:06.992 "adrfam": "IPv4", 00:23:06.992 "traddr": "10.0.0.2", 00:23:06.992 "trsvcid": "4420" 00:23:06.992 }, 00:23:06.992 "peer_address": { 00:23:06.992 "trtype": "TCP", 00:23:06.992 "adrfam": "IPv4", 00:23:06.992 "traddr": "10.0.0.1", 00:23:06.992 "trsvcid": "39610" 00:23:06.992 }, 00:23:06.992 "auth": { 00:23:06.992 "state": "completed", 00:23:06.992 "digest": "sha512", 00:23:06.992 "dhgroup": "ffdhe8192" 00:23:06.992 } 00:23:06.992 } 00:23:06.992 ]' 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.992 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.249 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGU4NzU5NjI1YWZmODYyMTg1ZjQ0NmZiNjU0MGExNGEwYTdiZGE5NDlmZDlhNGM5oMwwQQ==: --dhchap-ctrl-secret DHHC-1:03:MjU3ZjAyOWY1OGJlZDQyOTAxYWU3MTllMDgxNWIzOTFkOTMxYTNhYWY5NTBkMDk3NDlhODdlY2I1OWQ1YTRkYTuqLZA=: 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:08.183 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:23:09.117 request: 00:23:09.117 { 00:23:09.117 "name": "nvme0", 00:23:09.117 "trtype": "tcp", 00:23:09.117 "traddr": "10.0.0.2", 00:23:09.117 "adrfam": "ipv4", 00:23:09.117 "trsvcid": "4420", 00:23:09.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:09.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:09.117 "prchk_reftag": false, 00:23:09.117 "prchk_guard": false, 00:23:09.117 "hdgst": false, 00:23:09.117 "ddgst": false, 00:23:09.117 "dhchap_key": "key2", 00:23:09.117 "method": "bdev_nvme_attach_controller", 00:23:09.117 "req_id": 1 00:23:09.117 } 00:23:09.117 Got JSON-RPC error response 00:23:09.117 response: 00:23:09.117 { 00:23:09.117 "code": -5, 00:23:09.117 "message": "Input/output error" 00:23:09.117 } 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.117 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.056 request: 00:23:10.056 { 00:23:10.056 "name": "nvme0", 00:23:10.056 "trtype": "tcp", 00:23:10.056 "traddr": "10.0.0.2", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "4420", 00:23:10.056 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:10.056 "prchk_reftag": false, 00:23:10.056 "prchk_guard": false, 00:23:10.056 "hdgst": false, 00:23:10.056 "ddgst": false, 00:23:10.056 "dhchap_key": "key1", 00:23:10.056 "dhchap_ctrlr_key": "ckey2", 00:23:10.056 "method": "bdev_nvme_attach_controller", 00:23:10.056 "req_id": 1 00:23:10.056 } 00:23:10.056 Got JSON-RPC error response 00:23:10.056 response: 00:23:10.056 { 00:23:10.056 "code": -5, 00:23:10.056 "message": "Input/output error" 00:23:10.056 } 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.056 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.995 request: 00:23:10.995 { 00:23:10.995 "name": "nvme0", 00:23:10.995 "trtype": "tcp", 00:23:10.995 "traddr": "10.0.0.2", 00:23:10.995 "adrfam": "ipv4", 00:23:10.995 "trsvcid": "4420", 00:23:10.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:10.995 "prchk_reftag": false, 00:23:10.995 "prchk_guard": false, 00:23:10.995 "hdgst": false, 00:23:10.995 "ddgst": false, 00:23:10.995 "dhchap_key": "key1", 00:23:10.995 "dhchap_ctrlr_key": "ckey1", 00:23:10.995 "method": "bdev_nvme_attach_controller", 00:23:10.995 "req_id": 1 00:23:10.995 } 00:23:10.995 Got JSON-RPC error response 00:23:10.995 response: 00:23:10.995 { 00:23:10.995 "code": -5, 00:23:10.995 "message": "Input/output error" 00:23:10.995 } 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1900087 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1900087 ']' 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1900087 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1900087 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1900087' 00:23:10.995 killing process with pid 1900087 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1900087 00:23:10.995 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1900087 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1922726 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1922726 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1922726 ']' 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.374 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1922726 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1922726 ']' 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
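For reference, the host-side RPC sequence behind the failed attach attempts above can be reduced to the sketch below. It is only a sketch under the assumptions of this run: DH-HMAC-CHAP keys key0..key3 / ckey0..ckey3 were registered with the host application earlier in the run, the target on 10.0.0.2:4420 currently grants only key1/ckey1 to this host NQN, and the host app answers RPCs on /var/tmp/host.sock. The expected outcome is the -5 "Input/output error" JSON-RPC response shown in the log, not a working controller.

# minimal sketch of the negative attach test above (assumptions noted in the lead-in)
rpc_host="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

# limit the initiator to the digest/dhgroup under test
$rpc_host bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# attach with a key the target has not been granted; the RPC is expected to fail
# with {"code": -5, "message": "Input/output error"} as seen in the log
$rpc_host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 \
    || echo "attach rejected as expected"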
00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.310 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.570 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.570 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:13.570 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:13.570 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.570 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.146 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.147 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.079 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.079 { 00:23:15.079 
"cntlid": 1, 00:23:15.079 "qid": 0, 00:23:15.079 "state": "enabled", 00:23:15.079 "thread": "nvmf_tgt_poll_group_000", 00:23:15.079 "listen_address": { 00:23:15.079 "trtype": "TCP", 00:23:15.079 "adrfam": "IPv4", 00:23:15.079 "traddr": "10.0.0.2", 00:23:15.079 "trsvcid": "4420" 00:23:15.079 }, 00:23:15.079 "peer_address": { 00:23:15.079 "trtype": "TCP", 00:23:15.079 "adrfam": "IPv4", 00:23:15.079 "traddr": "10.0.0.1", 00:23:15.079 "trsvcid": "57280" 00:23:15.079 }, 00:23:15.079 "auth": { 00:23:15.079 "state": "completed", 00:23:15.079 "digest": "sha512", 00:23:15.079 "dhgroup": "ffdhe8192" 00:23:15.079 } 00:23:15.079 } 00:23:15.079 ]' 00:23:15.079 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.335 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.591 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzM1ZWU4NDIwMGQwZDZhNjM0MWU5NjQ4ZDkxYzVkZWZjMjFiNmFkOGRjN2I4NzJiYjlmZDY4NzE2ODMzOTg1Y9/LaFo=: 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:16.524 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.780 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:16.781 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.781 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.781 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.037 request: 00:23:17.037 { 00:23:17.037 "name": "nvme0", 00:23:17.037 "trtype": "tcp", 00:23:17.037 "traddr": "10.0.0.2", 00:23:17.037 "adrfam": "ipv4", 00:23:17.037 "trsvcid": "4420", 00:23:17.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:17.037 "prchk_reftag": false, 00:23:17.037 "prchk_guard": false, 00:23:17.037 "hdgst": false, 00:23:17.037 "ddgst": false, 00:23:17.037 "dhchap_key": "key3", 00:23:17.037 "method": "bdev_nvme_attach_controller", 00:23:17.037 "req_id": 1 00:23:17.037 } 00:23:17.037 Got JSON-RPC error response 00:23:17.037 response: 00:23:17.037 { 00:23:17.037 "code": -5, 00:23:17.037 "message": "Input/output error" 00:23:17.037 } 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:17.037 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.296 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.586 request: 00:23:17.586 { 00:23:17.586 "name": "nvme0", 00:23:17.586 "trtype": "tcp", 00:23:17.586 "traddr": "10.0.0.2", 00:23:17.586 "adrfam": "ipv4", 00:23:17.586 "trsvcid": "4420", 00:23:17.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:17.586 "prchk_reftag": false, 00:23:17.586 "prchk_guard": false, 00:23:17.586 "hdgst": false, 00:23:17.586 "ddgst": false, 00:23:17.586 "dhchap_key": "key3", 00:23:17.586 "method": "bdev_nvme_attach_controller", 00:23:17.586 "req_id": 1 00:23:17.586 } 00:23:17.586 Got JSON-RPC error response 00:23:17.586 response: 00:23:17.586 { 00:23:17.586 "code": -5, 00:23:17.586 "message": "Input/output error" 00:23:17.586 } 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:17.586 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:17.844 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.844 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.844 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:17.844 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:18.102 request: 00:23:18.102 { 00:23:18.102 "name": "nvme0", 00:23:18.102 "trtype": "tcp", 00:23:18.102 "traddr": "10.0.0.2", 00:23:18.102 "adrfam": "ipv4", 00:23:18.102 "trsvcid": "4420", 00:23:18.102 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:18.102 "prchk_reftag": false, 00:23:18.102 "prchk_guard": false, 00:23:18.102 "hdgst": false, 00:23:18.102 "ddgst": false, 00:23:18.102 
"dhchap_key": "key0", 00:23:18.102 "dhchap_ctrlr_key": "key1", 00:23:18.102 "method": "bdev_nvme_attach_controller", 00:23:18.102 "req_id": 1 00:23:18.102 } 00:23:18.102 Got JSON-RPC error response 00:23:18.102 response: 00:23:18.102 { 00:23:18.102 "code": -5, 00:23:18.102 "message": "Input/output error" 00:23:18.102 } 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:18.102 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:18.667 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.667 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1900238 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1900238 ']' 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1900238 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:18.925 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.926 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1900238 00:23:19.184 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.184 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.184 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1900238' 00:23:19.184 killing process with pid 1900238 00:23:19.184 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1900238 00:23:19.184 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1900238 
00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.722 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.722 rmmod nvme_tcp 00:23:21.723 rmmod nvme_fabrics 00:23:21.723 rmmod nvme_keyring 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1922726 ']' 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1922726 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1922726 ']' 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1922726 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922726 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922726' 00:23:21.723 killing process with pid 1922726 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1922726 00:23:21.723 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1922726 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.658 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.195 14:56:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.195 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.u13 /tmp/spdk.key-sha256.uxX /tmp/spdk.key-sha384.B0u /tmp/spdk.key-sha512.339 /tmp/spdk.key-sha512.4qX /tmp/spdk.key-sha384.G9Z /tmp/spdk.key-sha256.GwX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:25.196 00:23:25.196 real 3m15.220s 00:23:25.196 user 7m30.405s 00:23:25.196 sys 0m24.493s 00:23:25.196 14:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.196 14:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.196 ************************************ 00:23:25.196 END TEST nvmf_auth_target 00:23:25.196 ************************************ 00:23:25.196 14:56:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:25.196 14:56:04 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:25.196 14:56:04 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:25.196 14:56:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:25.196 14:56:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.196 14:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.196 ************************************ 00:23:25.196 START TEST nvmf_bdevio_no_huge 00:23:25.196 ************************************ 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:25.196 * Looking for test storage... 00:23:25.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.196 14:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.099 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:27.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:27.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:27.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:27.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:23:27.100 00:23:27.100 --- 10.0.0.2 ping statistics --- 00:23:27.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.100 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:23:27.100 00:23:27.100 --- 10.0.0.1 ping statistics --- 00:23:27.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.100 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.100 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1925897 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1925897 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1925897 ']' 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.101 14:56:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.358 [2024-07-14 14:56:06.418155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:27.358 [2024-07-14 14:56:06.418319] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:27.358 [2024-07-14 14:56:06.581165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.617 [2024-07-14 14:56:06.858618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:27.617 [2024-07-14 14:56:06.858697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.617 [2024-07-14 14:56:06.858724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.617 [2024-07-14 14:56:06.858745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.617 [2024-07-14 14:56:06.858778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.617 [2024-07-14 14:56:06.858918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:27.617 [2024-07-14 14:56:06.858965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:27.617 [2024-07-14 14:56:06.858995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:27.617 [2024-07-14 14:56:06.858982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 [2024-07-14 14:56:07.391309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 Malloc0 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.183 14:56:07 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 [2024-07-14 14:56:07.481265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.183 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.183 { 00:23:28.183 "params": { 00:23:28.183 "name": "Nvme$subsystem", 00:23:28.183 "trtype": "$TEST_TRANSPORT", 00:23:28.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.183 "adrfam": "ipv4", 00:23:28.183 "trsvcid": "$NVMF_PORT", 00:23:28.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.183 "hdgst": ${hdgst:-false}, 00:23:28.184 "ddgst": ${ddgst:-false} 00:23:28.184 }, 00:23:28.184 "method": "bdev_nvme_attach_controller" 00:23:28.184 } 00:23:28.184 EOF 00:23:28.184 )") 00:23:28.184 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:28.184 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:28.444 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:28.444 14:56:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.444 "params": { 00:23:28.444 "name": "Nvme1", 00:23:28.444 "trtype": "tcp", 00:23:28.444 "traddr": "10.0.0.2", 00:23:28.444 "adrfam": "ipv4", 00:23:28.444 "trsvcid": "4420", 00:23:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.444 "hdgst": false, 00:23:28.444 "ddgst": false 00:23:28.444 }, 00:23:28.444 "method": "bdev_nvme_attach_controller" 00:23:28.444 }' 00:23:28.444 [2024-07-14 14:56:07.563045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:28.444 [2024-07-14 14:56:07.563191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1926060 ] 00:23:28.444 [2024-07-14 14:56:07.702896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:28.703 [2024-07-14 14:56:07.957970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.703 [2024-07-14 14:56:07.957995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.703 [2024-07-14 14:56:07.958002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.270 I/O targets: 00:23:29.270 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:29.270 00:23:29.270 00:23:29.270 CUnit - A unit testing framework for C - Version 2.1-3 00:23:29.270 http://cunit.sourceforge.net/ 00:23:29.270 00:23:29.270 00:23:29.270 Suite: bdevio tests on: Nvme1n1 00:23:29.270 Test: blockdev write read block ...passed 00:23:29.270 Test: blockdev write zeroes read block ...passed 00:23:29.270 Test: blockdev write zeroes read no split ...passed 00:23:29.270 Test: blockdev write zeroes read split ...passed 00:23:29.270 Test: blockdev write zeroes read split partial ...passed 00:23:29.270 Test: blockdev reset ...[2024-07-14 14:56:08.544007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.270 [2024-07-14 14:56:08.544192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:23:29.270 [2024-07-14 14:56:08.565125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:29.270 passed 00:23:29.270 Test: blockdev write read 8 blocks ...passed 00:23:29.270 Test: blockdev write read size > 128k ...passed 00:23:29.270 Test: blockdev write read invalid size ...passed 00:23:29.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:29.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:29.530 Test: blockdev write read max offset ...passed 00:23:29.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:29.530 Test: blockdev writev readv 8 blocks ...passed 00:23:29.530 Test: blockdev writev readv 30 x 1block ...passed 00:23:29.530 Test: blockdev writev readv block ...passed 00:23:29.530 Test: blockdev writev readv size > 128k ...passed 00:23:29.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:29.530 Test: blockdev comparev and writev ...[2024-07-14 14:56:08.779709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.779775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.779819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.779848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.780362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.780401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.780428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.780903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.780936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.780968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.780994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.781468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:29.530 [2024-07-14 14:56:08.781533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.530 [2024-07-14 14:56:08.781559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:29.530 passed 00:23:29.789 Test: blockdev nvme passthru rw ...passed 00:23:29.789 Test: blockdev nvme passthru vendor specific ...[2024-07-14 14:56:08.864313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.789 [2024-07-14 14:56:08.864370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:29.789 [2024-07-14 14:56:08.864648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.789 [2024-07-14 14:56:08.864681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:29.789 [2024-07-14 14:56:08.864885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.789 [2024-07-14 14:56:08.864917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:29.789 [2024-07-14 14:56:08.865129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.789 [2024-07-14 14:56:08.865160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:29.789 passed 00:23:29.789 Test: blockdev nvme admin passthru ...passed 00:23:29.789 Test: blockdev copy ...passed 00:23:29.789 00:23:29.789 Run Summary: Type Total Ran Passed Failed Inactive 00:23:29.789 suites 1 1 n/a 0 0 00:23:29.789 tests 23 23 23 0 0 00:23:29.789 asserts 152 152 152 0 n/a 00:23:29.789 00:23:29.789 Elapsed time = 1.153 seconds 00:23:30.357 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.357 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.357 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.614 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.615 rmmod nvme_tcp 00:23:30.615 rmmod nvme_fabrics 00:23:30.615 rmmod nvme_keyring 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1925897 ']' 00:23:30.615 14:56:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1925897 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1925897 ']' 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1925897 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1925897 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1925897' 00:23:30.615 killing process with pid 1925897 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1925897 00:23:30.615 14:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1925897 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.550 14:56:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.459 14:56:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.459 00:23:33.459 real 0m8.601s 00:23:33.459 user 0m18.794s 00:23:33.459 sys 0m2.837s 00:23:33.459 14:56:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.459 14:56:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:33.459 ************************************ 00:23:33.459 END TEST nvmf_bdevio_no_huge 00:23:33.459 ************************************ 00:23:33.459 14:56:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.459 14:56:12 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:33.459 14:56:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.459 14:56:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.459 14:56:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.459 ************************************ 00:23:33.459 START TEST nvmf_tls 00:23:33.459 ************************************ 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:33.459 * Looking for test storage... 
00:23:33.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.459 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.717 14:56:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.717 14:56:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.717 14:56:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.718 14:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.618 
14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.618 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:35.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:35.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:35.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:35.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:23:35.619 00:23:35.619 --- 10.0.0.2 ping statistics --- 00:23:35.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.619 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:23:35.619 00:23:35.619 --- 10.0.0.1 ping statistics --- 00:23:35.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.619 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1928262 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1928262 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1928262 ']' 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.619 14:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.879 [2024-07-14 14:56:14.988385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:35.879 [2024-07-14 14:56:14.988548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.879 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.879 [2024-07-14 14:56:15.125940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.137 [2024-07-14 14:56:15.381325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.137 [2024-07-14 14:56:15.381390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
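The nvmf_tcp_init steps traced above reduce to a short recipe: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened for NVMe/TCP, and both directions are ping-checked. Condensed from the trace, using the interface names this host enumerated (other hosts will report different names):

  # Reachability setup performed by nvmf_tcp_init, condensed from the trace above.
  ip netns add cvl_0_0_ns_spdk                                       # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator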
00:23:36.137 [2024-07-14 14:56:15.381417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.137 [2024-07-14 14:56:15.381457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.137 [2024-07-14 14:56:15.381475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.137 [2024-07-14 14:56:15.381530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:36.703 14:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:36.989 true 00:23:36.989 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:36.989 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:37.248 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:37.248 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:37.248 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:37.507 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:37.507 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:37.766 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:37.766 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:37.766 14:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:38.024 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.024 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:38.282 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:38.282 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:38.282 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.282 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:38.539 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:38.539 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:38.539 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:38.798 14:56:17 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.798 14:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:39.056 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:39.056 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:39.056 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:39.315 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:39.315 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.5z6Fzabdf4 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.InPAqvZpBV 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:39.573 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:39.831 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.5z6Fzabdf4 00:23:39.831 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.InPAqvZpBV 00:23:39.831 14:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:39.831 14:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:40.767 14:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.5z6Fzabdf4 00:23:40.767 14:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5z6Fzabdf4 00:23:40.767 14:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.024 [2024-07-14 14:56:20.085066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.024 14:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.282 14:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.541 [2024-07-14 14:56:20.658553] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.541 [2024-07-14 14:56:20.658900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.541 14:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.800 malloc0 00:23:41.800 14:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.057 14:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5z6Fzabdf4 00:23:42.315 [2024-07-14 14:56:21.450481] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:42.315 14:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.5z6Fzabdf4 00:23:42.315 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.534 Initializing NVMe Controllers 00:23:54.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.534 Initialization complete. Launching workers. 
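Before the target was configured, format_interchange_psk turned the raw hex keys into the NVMeTLSkey-1:... strings written to /tmp/tmp.5z6Fzabdf4 and /tmp/tmp.InPAqvZpBV (mode 0600); those files are what --psk and --psk-path point at later in the run. Read off this trace, each string is the literal prefix NVMeTLSkey-1, a two-digit field tied to the digest argument (1 gives 01 here, 2 gives 02 for the longer key created near the end of the run), and a base64 payload that decodes to the key material followed by four trailing checksum-looking bytes. A small sketch for pulling the first key apart; the layout reading is inferred from this trace, not quoted from a spec:

  # Split the first interchange string from the trace into its visible fields.
  psk='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  printf '%s\n' "$psk" | cut -d: -f2                      # digest indicator: 01
  printf '%s\n' "$psk" | cut -d: -f3 | base64 -d | xxd    # key bytes + 4 trailing bytes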
00:23:54.534 ======================================================== 00:23:54.534 Latency(us) 00:23:54.534 Device Information : IOPS MiB/s Average min max 00:23:54.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5450.56 21.29 11747.23 2445.82 18536.86 00:23:54.535 ======================================================== 00:23:54.535 Total : 5450.56 21.29 11747.23 2445.82 18536.86 00:23:54.535 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5z6Fzabdf4 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5z6Fzabdf4' 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1930282 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1930282 /var/tmp/bdevperf.sock 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1930282 ']' 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.535 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.535 [2024-07-14 14:56:31.772192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:54.535 [2024-07-14 14:56:31.772329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930282 ] 00:23:54.535 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.535 [2024-07-14 14:56:31.895201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.535 [2024-07-14 14:56:32.119578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.535 14:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.535 14:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.535 14:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5z6Fzabdf4 00:23:54.535 [2024-07-14 14:56:32.931584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.535 [2024-07-14 14:56:32.931773] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:54.535 TLSTESTn1 00:23:54.535 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.535 Running I/O for 10 seconds... 00:24:04.509 00:24:04.509 Latency(us) 00:24:04.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.509 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:04.509 Verification LBA range: start 0x0 length 0x2000 00:24:04.509 TLSTESTn1 : 10.04 2656.97 10.38 0.00 0.00 48066.18 8252.68 45632.47 00:24:04.509 =================================================================================================================== 00:24:04.509 Total : 2656.97 10.38 0.00 0.00 48066.18 8252.68 45632.47 00:24:04.509 0 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1930282 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1930282 ']' 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1930282 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930282 00:24:04.509 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:04.510 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:04.510 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930282' 00:24:04.510 killing process with pid 1930282 00:24:04.510 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1930282 00:24:04.510 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.510 00:24:04.510 Latency(us) 00:24:04.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:04.510 =================================================================================================================== 00:24:04.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.510 [2024-07-14 14:56:43.234430] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:04.510 14:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1930282 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.InPAqvZpBV 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.InPAqvZpBV 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.InPAqvZpBV 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.InPAqvZpBV' 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1931736 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1931736 /var/tmp/bdevperf.sock 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1931736 ']' 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.074 14:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.074 [2024-07-14 14:56:44.252979] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:05.075 [2024-07-14 14:56:44.253121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931736 ] 00:24:05.075 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.075 [2024-07-14 14:56:44.375354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.332 [2024-07-14 14:56:44.598376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.InPAqvZpBV 00:24:06.269 [2024-07-14 14:56:45.488798] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.269 [2024-07-14 14:56:45.489033] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.269 [2024-07-14 14:56:45.501210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:06.269 [2024-07-14 14:56:45.501363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:06.269 [2024-07-14 14:56:45.502331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:06.269 [2024-07-14 14:56:45.503330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.269 [2024-07-14 14:56:45.503359] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:06.269 [2024-07-14 14:56:45.503407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
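This failing attach goes through the same harness as the passing TLSTESTn1 run: bdevperf is started idle (-z) on its own RPC socket, the controller is attached over that socket with bdev_nvme_attach_controller, and bdevperf.py drives the workload. The only difference here is the key file: /tmp/tmp.InPAqvZpBV was never registered on the target for the host1/cnode1 pair, so the TLS handshake fails and the errors above stay at the socket level (errno 107, then a bad file descriptor) rather than naming a missing PSK. Condensed from the trace, with the key that works:

  # The bdevperf flow used for the passing run, condensed (paths as used in this run).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &                     # start idle, wait for RPCs
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.5z6Fzabdf4                            # key registered on the target
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
  # Pointing --psk at /tmp/tmp.InPAqvZpBV instead reproduces the failure above:
  # the handshake never completes and the RPC returns -5 (Input/output error).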
00:24:06.269 request: 00:24:06.269 { 00:24:06.269 "name": "TLSTEST", 00:24:06.269 "trtype": "tcp", 00:24:06.269 "traddr": "10.0.0.2", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.269 "prchk_reftag": false, 00:24:06.269 "prchk_guard": false, 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false, 00:24:06.269 "psk": "/tmp/tmp.InPAqvZpBV", 00:24:06.269 "method": "bdev_nvme_attach_controller", 00:24:06.269 "req_id": 1 00:24:06.269 } 00:24:06.269 Got JSON-RPC error response 00:24:06.269 response: 00:24:06.269 { 00:24:06.269 "code": -5, 00:24:06.269 "message": "Input/output error" 00:24:06.269 } 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1931736 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1931736 ']' 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1931736 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1931736 00:24:06.269 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:06.270 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:06.270 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1931736' 00:24:06.270 killing process with pid 1931736 00:24:06.270 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1931736 00:24:06.270 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.270 00:24:06.270 Latency(us) 00:24:06.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.270 =================================================================================================================== 00:24:06.270 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:06.270 14:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1931736 00:24:06.270 [2024-07-14 14:56:45.555054] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:07.207 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:07.207 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5z6Fzabdf4 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5z6Fzabdf4 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5z6Fzabdf4 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5z6Fzabdf4' 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1932009 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1932009 /var/tmp/bdevperf.sock 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1932009 ']' 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.467 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.467 [2024-07-14 14:56:46.601014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:07.467 [2024-07-14 14:56:46.601174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932009 ] 00:24:07.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.467 [2024-07-14 14:56:46.721335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.726 [2024-07-14 14:56:46.943676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.296 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.296 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:08.296 14:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.5z6Fzabdf4 00:24:08.610 [2024-07-14 14:56:47.749747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.610 [2024-07-14 14:56:47.749980] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:08.610 [2024-07-14 14:56:47.759965] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:08.610 [2024-07-14 14:56:47.760005] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:08.610 [2024-07-14 14:56:47.760069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:08.610 [2024-07-14 14:56:47.760839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:08.610 [2024-07-14 14:56:47.761812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:08.610 [2024-07-14 14:56:47.762806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.610 [2024-07-14 14:56:47.762856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:08.610 [2024-07-14 14:56:47.762902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
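This case presents the valid key, but under hostnqn nqn.2016-06.io.spdk:host2. The target resolves PSKs by the identity printed in the error, NVMe0R01 <hostnqn> <subnqn>, and it only holds an entry for the host1/cnode1 pair registered earlier via nvmf_subsystem_add_host --psk, so the lookup itself fails ("Could not find PSK for identity"). If host2 were actually meant to connect, it would need its own registration on the target first; a hypothetical sketch (host2's key file is illustrative and is not created anywhere in this run):

  # Hypothetical: give host2 its own PSK registration on the target,
  # mirroring the add_host call made for host1 earlier in this trace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key       # illustrative key path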
00:24:08.610 request: 00:24:08.610 { 00:24:08.610 "name": "TLSTEST", 00:24:08.610 "trtype": "tcp", 00:24:08.610 "traddr": "10.0.0.2", 00:24:08.610 "adrfam": "ipv4", 00:24:08.610 "trsvcid": "4420", 00:24:08.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.610 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:08.610 "prchk_reftag": false, 00:24:08.610 "prchk_guard": false, 00:24:08.610 "hdgst": false, 00:24:08.610 "ddgst": false, 00:24:08.610 "psk": "/tmp/tmp.5z6Fzabdf4", 00:24:08.610 "method": "bdev_nvme_attach_controller", 00:24:08.610 "req_id": 1 00:24:08.610 } 00:24:08.610 Got JSON-RPC error response 00:24:08.610 response: 00:24:08.610 { 00:24:08.610 "code": -5, 00:24:08.610 "message": "Input/output error" 00:24:08.610 } 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1932009 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1932009 ']' 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1932009 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932009 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932009' 00:24:08.610 killing process with pid 1932009 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1932009 00:24:08.610 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.610 00:24:08.610 Latency(us) 00:24:08.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.610 =================================================================================================================== 00:24:08.610 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.610 14:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1932009 00:24:08.610 [2024-07-14 14:56:47.804510] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5z6Fzabdf4 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5z6Fzabdf4 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5z6Fzabdf4 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5z6Fzabdf4' 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1932282 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1932282 /var/tmp/bdevperf.sock 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1932282 ']' 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.548 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.548 [2024-07-14 14:56:48.828489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:09.548 [2024-07-14 14:56:48.828634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932282 ] 00:24:09.808 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.808 [2024-07-14 14:56:48.951846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.069 [2024-07-14 14:56:49.179744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.637 14:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.637 14:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:10.637 14:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5z6Fzabdf4 00:24:10.897 [2024-07-14 14:56:50.053910] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.897 [2024-07-14 14:56:50.054172] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:10.897 [2024-07-14 14:56:50.064304] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:10.897 [2024-07-14 14:56:50.064344] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:10.897 [2024-07-14 14:56:50.064418] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.897 [2024-07-14 14:56:50.065395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:10.897 [2024-07-14 14:56:50.066364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:10.897 [2024-07-14 14:56:50.067357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:10.897 [2024-07-14 14:56:50.067394] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:10.897 [2024-07-14 14:56:50.067437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:10.897 request: 00:24:10.897 { 00:24:10.897 "name": "TLSTEST", 00:24:10.897 "trtype": "tcp", 00:24:10.897 "traddr": "10.0.0.2", 00:24:10.897 "adrfam": "ipv4", 00:24:10.897 "trsvcid": "4420", 00:24:10.897 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:10.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.897 "prchk_reftag": false, 00:24:10.897 "prchk_guard": false, 00:24:10.897 "hdgst": false, 00:24:10.897 "ddgst": false, 00:24:10.897 "psk": "/tmp/tmp.5z6Fzabdf4", 00:24:10.897 "method": "bdev_nvme_attach_controller", 00:24:10.897 "req_id": 1 00:24:10.897 } 00:24:10.897 Got JSON-RPC error response 00:24:10.897 response: 00:24:10.897 { 00:24:10.897 "code": -5, 00:24:10.897 "message": "Input/output error" 00:24:10.897 } 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1932282 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1932282 ']' 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1932282 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932282 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932282' 00:24:10.897 killing process with pid 1932282 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1932282 00:24:10.897 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.897 00:24:10.897 Latency(us) 00:24:10.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.897 =================================================================================================================== 00:24:10.897 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:10.897 [2024-07-14 14:56:50.122059] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:10.897 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1932282 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1932561 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1932561 /var/tmp/bdevperf.sock 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1932561 ']' 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.833 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.833 [2024-07-14 14:56:51.129109] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:11.833 [2024-07-14 14:56:51.129251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932561 ] 00:24:12.093 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.093 [2024-07-14 14:56:51.251625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.353 [2024-07-14 14:56:51.481308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.920 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.920 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:12.920 14:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:13.181 [2024-07-14 14:56:52.368915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:13.181 [2024-07-14 14:56:52.370837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:24:13.181 [2024-07-14 14:56:52.371820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:13.181 [2024-07-14 14:56:52.371873] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:13.181 [2024-07-14 14:56:52.371909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
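The last negative case drops --psk entirely. The listener on 10.0.0.2:4420 was created with -k, and without a key the attach never gets past connection setup; the trace above shows only socket-level errors (errno 107) and the controller ending up in a failed state, with no PSK-lookup message at all. In RPC terms the failing and passing invocations differ only in that one argument:

  # Without a key (fails against the -k listener, as traced above):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Adding '--psk /tmp/tmp.5z6Fzabdf4' turns this into the attach that succeeded earlier.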
00:24:13.181 request: 00:24:13.181 { 00:24:13.181 "name": "TLSTEST", 00:24:13.181 "trtype": "tcp", 00:24:13.181 "traddr": "10.0.0.2", 00:24:13.181 "adrfam": "ipv4", 00:24:13.181 "trsvcid": "4420", 00:24:13.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.181 "prchk_reftag": false, 00:24:13.181 "prchk_guard": false, 00:24:13.181 "hdgst": false, 00:24:13.181 "ddgst": false, 00:24:13.181 "method": "bdev_nvme_attach_controller", 00:24:13.181 "req_id": 1 00:24:13.181 } 00:24:13.181 Got JSON-RPC error response 00:24:13.181 response: 00:24:13.181 { 00:24:13.181 "code": -5, 00:24:13.181 "message": "Input/output error" 00:24:13.181 } 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1932561 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1932561 ']' 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1932561 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932561 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932561' 00:24:13.181 killing process with pid 1932561 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1932561 00:24:13.181 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.181 00:24:13.181 Latency(us) 00:24:13.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.181 =================================================================================================================== 00:24:13.181 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:13.181 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1932561 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1928262 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1928262 ']' 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1928262 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1928262 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1928262' 00:24:14.117 
killing process with pid 1928262 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1928262 00:24:14.117 [2024-07-14 14:56:53.366418] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:14.117 14:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1928262 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nU1Vut9UEl 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nU1Vut9UEl 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1932976 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1932976 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1932976 ']' 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.023 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.024 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.024 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.024 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.024 [2024-07-14 14:56:54.999033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
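Note on the key material generated above: tls.sh@159 runs a 48-hex-character key and digest "2" through nvmf/common.sh format_key, which emits the NVMe TLS PSK interchange form NVMeTLSkey-1:02:<base64>:. A minimal sketch of that formatting step is below, assuming the base64 payload is the raw key string followed by a little-endian CRC32 trailer; the trailer layout is inferred from the logged output, not confirmed against the script source.

format_psk_interchange() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
data = key.encode()
crc = zlib.crc32(data).to_bytes(4, "little")  # trailer byte order is an assumption
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(data + crc).decode()))
PY
}

key_long=$(format_psk_interchange NVMeTLSkey-1 \
    00112233445566778899aabbccddeeff0011223344556677 2)
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"   # the target refuses PSK files readable by group/other

The 0600 mode matters: the permission-related failures exercised later in this run (tls.sh@171 and tls.sh@177) hinge on exactly that check.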
00:24:16.024 [2024-07-14 14:56:54.999169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.024 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.024 [2024-07-14 14:56:55.140943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.284 [2024-07-14 14:56:55.399980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.284 [2024-07-14 14:56:55.400053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.284 [2024-07-14 14:56:55.400081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.284 [2024-07-14 14:56:55.400108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.284 [2024-07-14 14:56:55.400130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.284 [2024-07-14 14:56:55.400179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nU1Vut9UEl 00:24:16.851 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:17.109 [2024-07-14 14:56:56.245944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.109 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.367 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:17.625 [2024-07-14 14:56:56.739279] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.625 [2024-07-14 14:56:56.739589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.625 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:17.883 malloc0 00:24:17.883 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:18.141 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.nU1Vut9UEl 00:24:18.402 [2024-07-14 14:56:57.520796] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nU1Vut9UEl 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nU1Vut9UEl' 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1933386 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1933386 /var/tmp/bdevperf.sock 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1933386 ']' 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.402 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.402 [2024-07-14 14:56:57.611488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
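For reference, the target-side sequence that setup_nvmf_tgt (tls.sh@165, via tls.sh@49-58) just completed boils down to the following RPCs; this is only a condensed restatement of the commands visible in the log, with rpc pointing at scripts/rpc.py.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.nU1Vut9UEl

The -k on the listener is what enables the (still experimental) TLS support, and --psk on add_host ties the host NQN to the key file created above.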
00:24:18.402 [2024-07-14 14:56:57.611625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933386 ] 00:24:18.402 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.662 [2024-07-14 14:56:57.738661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.662 [2024-07-14 14:56:57.961237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.600 14:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.600 14:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:19.600 14:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl 00:24:19.600 [2024-07-14 14:56:58.801952] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.600 [2024-07-14 14:56:58.802189] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:19.600 TLSTESTn1 00:24:19.600 14:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:19.860 Running I/O for 10 seconds... 00:24:29.834 00:24:29.834 Latency(us) 00:24:29.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.834 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:29.834 Verification LBA range: start 0x0 length 0x2000 00:24:29.834 TLSTESTn1 : 10.03 2723.30 10.64 0.00 0.00 46910.29 8689.59 47185.92 00:24:29.834 =================================================================================================================== 00:24:29.834 Total : 2723.30 10.64 0.00 0.00 46910.29 8689.59 47185.92 00:24:29.834 0 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1933386 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1933386 ']' 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1933386 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1933386 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1933386' 00:24:29.834 killing process with pid 1933386 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1933386 00:24:29.834 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.834 00:24:29.834 Latency(us) 00:24:29.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:29.834 =================================================================================================================== 00:24:29.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.834 [2024-07-14 14:57:09.122533] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:29.834 14:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1933386 00:24:31.209 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nU1Vut9UEl 00:24:31.209 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nU1Vut9UEl 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nU1Vut9UEl 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nU1Vut9UEl 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nU1Vut9UEl' 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1935451 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1935451 /var/tmp/bdevperf.sock 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1935451 ']' 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.210 14:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.210 [2024-07-14 14:57:10.159954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
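tls.sh@170 then relaxes the key file to 0666, and tls.sh@171 wraps run_bdevperf in NOT, i.e. the same initiator-side attach that just succeeded is now expected to fail. The attach, as issued against the bdevperf RPC socket in this log, is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl

With the key world-readable, bdev_nvme refuses to load it, which is the "Incorrect permissions for PSK file" / "Operation not permitted" error recorded below.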
00:24:31.210 [2024-07-14 14:57:10.160097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935451 ] 00:24:31.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.210 [2024-07-14 14:57:10.282549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.210 [2024-07-14 14:57:10.504958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl 00:24:32.167 [2024-07-14 14:57:11.408054] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.167 [2024-07-14 14:57:11.408175] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:32.167 [2024-07-14 14:57:11.408212] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nU1Vut9UEl 00:24:32.167 request: 00:24:32.167 { 00:24:32.167 "name": "TLSTEST", 00:24:32.167 "trtype": "tcp", 00:24:32.167 "traddr": "10.0.0.2", 00:24:32.167 "adrfam": "ipv4", 00:24:32.167 "trsvcid": "4420", 00:24:32.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.167 "prchk_reftag": false, 00:24:32.167 "prchk_guard": false, 00:24:32.167 "hdgst": false, 00:24:32.167 "ddgst": false, 00:24:32.167 "psk": "/tmp/tmp.nU1Vut9UEl", 00:24:32.167 "method": "bdev_nvme_attach_controller", 00:24:32.167 "req_id": 1 00:24:32.167 } 00:24:32.167 Got JSON-RPC error response 00:24:32.167 response: 00:24:32.167 { 00:24:32.167 "code": -1, 00:24:32.167 "message": "Operation not permitted" 00:24:32.167 } 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1935451 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1935451 ']' 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1935451 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1935451 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1935451' 00:24:32.167 killing process with pid 1935451 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1935451 00:24:32.167 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.167 00:24:32.167 Latency(us) 00:24:32.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.167 
=================================================================================================================== 00:24:32.167 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.167 14:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1935451 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1932976 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1932976 ']' 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1932976 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.101 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932976 00:24:33.360 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.360 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.360 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932976' 00:24:33.360 killing process with pid 1932976 00:24:33.360 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1932976 00:24:33.360 [2024-07-14 14:57:12.415053] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:33.360 14:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1932976 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1935871 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1935871 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1935871 ']' 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
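The same permission rule is enforced on the target side: tls.sh@177 re-runs setup_nvmf_tgt with the key still at 0666 and expects the final nvmf_subsystem_add_host to fail (the -32603 Internal error seen further down). A hypothetical pre-flight check, not part of tls.sh, that mirrors what the target enforces:

# Hypothetical helper (assumption, not in the test script): refuse PSK files
# readable by group/other before handing them to any RPC.
psk=/tmp/tmp.nU1Vut9UEl
mode=$(stat -c '%a' "$psk")
if [ "$mode" != "600" ]; then
    echo "refusing to use $psk: expected mode 0600, got $mode" >&2
    exit 1
fi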
00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.739 14:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.739 [2024-07-14 14:57:13.907331] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:34.739 [2024-07-14 14:57:13.907484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.739 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.739 [2024-07-14 14:57:14.039329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.997 [2024-07-14 14:57:14.297321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.997 [2024-07-14 14:57:14.297397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.997 [2024-07-14 14:57:14.297425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.997 [2024-07-14 14:57:14.297451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.997 [2024-07-14 14:57:14.297474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.997 [2024-07-14 14:57:14.297528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.566 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.566 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:35.566 14:57:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.566 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.566 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nU1Vut9UEl 00:24:35.824 14:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:36.081 [2024-07-14 14:57:15.146956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.081 14:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:36.338 
14:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:36.595 [2024-07-14 14:57:15.684488] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.595 [2024-07-14 14:57:15.684827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.595 14:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:36.853 malloc0 00:24:36.853 14:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:37.111 14:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl 00:24:37.371 [2024-07-14 14:57:16.465751] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:37.371 [2024-07-14 14:57:16.465810] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:37.371 [2024-07-14 14:57:16.465867] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:37.371 request: 00:24:37.371 { 00:24:37.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.371 "host": "nqn.2016-06.io.spdk:host1", 00:24:37.371 "psk": "/tmp/tmp.nU1Vut9UEl", 00:24:37.371 "method": "nvmf_subsystem_add_host", 00:24:37.371 "req_id": 1 00:24:37.371 } 00:24:37.371 Got JSON-RPC error response 00:24:37.371 response: 00:24:37.371 { 00:24:37.371 "code": -32603, 00:24:37.371 "message": "Internal error" 00:24:37.371 } 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1935871 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1935871 ']' 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1935871 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1935871 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1935871' 00:24:37.371 killing process with pid 1935871 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1935871 00:24:37.371 14:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1935871 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nU1Vut9UEl 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:38.750 
14:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1936424 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1936424 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1936424 ']' 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.750 14:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.010 [2024-07-14 14:57:18.089601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:39.010 [2024-07-14 14:57:18.089781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.010 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.010 [2024-07-14 14:57:18.231960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.270 [2024-07-14 14:57:18.494571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.270 [2024-07-14 14:57:18.494647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.270 [2024-07-14 14:57:18.494677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.270 [2024-07-14 14:57:18.494704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.270 [2024-07-14 14:57:18.494726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.270 [2024-07-14 14:57:18.494778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nU1Vut9UEl 00:24:39.836 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.094 [2024-07-14 14:57:19.268454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.094 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:40.352 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:40.610 [2024-07-14 14:57:19.806051] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:40.610 [2024-07-14 14:57:19.806429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.610 14:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:40.867 malloc0 00:24:40.867 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:41.125 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl 00:24:41.383 [2024-07-14 14:57:20.597087] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:41.383 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1936717 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1936717 /var/tmp/bdevperf.sock 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1936717 ']' 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.384 14:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.642 [2024-07-14 14:57:20.697941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:41.642 [2024-07-14 14:57:20.698089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936717 ] 00:24:41.642 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.642 [2024-07-14 14:57:20.827524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.900 [2024-07-14 14:57:21.047563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.466 14:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.466 14:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:42.466 14:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nU1Vut9UEl 00:24:42.724 [2024-07-14 14:57:21.835972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:42.724 [2024-07-14 14:57:21.836208] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:42.724 TLSTESTn1 00:24:42.724 14:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:43.001 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:43.001 "subsystems": [ 00:24:43.001 { 00:24:43.001 "subsystem": "keyring", 00:24:43.001 "config": [] 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "subsystem": "iobuf", 00:24:43.001 "config": [ 00:24:43.001 { 00:24:43.001 "method": "iobuf_set_options", 00:24:43.001 "params": { 00:24:43.001 "small_pool_count": 8192, 00:24:43.001 "large_pool_count": 1024, 00:24:43.001 "small_bufsize": 8192, 00:24:43.001 "large_bufsize": 135168 00:24:43.001 } 00:24:43.001 } 00:24:43.001 ] 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "subsystem": "sock", 00:24:43.001 "config": [ 00:24:43.001 { 00:24:43.001 "method": "sock_set_default_impl", 00:24:43.001 "params": { 00:24:43.001 "impl_name": "posix" 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "sock_impl_set_options", 00:24:43.001 "params": { 00:24:43.001 "impl_name": "ssl", 00:24:43.001 "recv_buf_size": 4096, 00:24:43.001 "send_buf_size": 4096, 00:24:43.001 "enable_recv_pipe": true, 00:24:43.001 "enable_quickack": false, 00:24:43.001 "enable_placement_id": 0, 00:24:43.001 "enable_zerocopy_send_server": true, 00:24:43.001 "enable_zerocopy_send_client": false, 00:24:43.001 "zerocopy_threshold": 0, 00:24:43.001 "tls_version": 0, 00:24:43.001 "enable_ktls": false 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "sock_impl_set_options", 00:24:43.001 "params": { 00:24:43.001 "impl_name": "posix", 00:24:43.001 "recv_buf_size": 2097152, 00:24:43.001 
"send_buf_size": 2097152, 00:24:43.001 "enable_recv_pipe": true, 00:24:43.001 "enable_quickack": false, 00:24:43.001 "enable_placement_id": 0, 00:24:43.001 "enable_zerocopy_send_server": true, 00:24:43.001 "enable_zerocopy_send_client": false, 00:24:43.001 "zerocopy_threshold": 0, 00:24:43.001 "tls_version": 0, 00:24:43.001 "enable_ktls": false 00:24:43.001 } 00:24:43.001 } 00:24:43.001 ] 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "subsystem": "vmd", 00:24:43.001 "config": [] 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "subsystem": "accel", 00:24:43.001 "config": [ 00:24:43.001 { 00:24:43.001 "method": "accel_set_options", 00:24:43.001 "params": { 00:24:43.001 "small_cache_size": 128, 00:24:43.001 "large_cache_size": 16, 00:24:43.001 "task_count": 2048, 00:24:43.001 "sequence_count": 2048, 00:24:43.001 "buf_count": 2048 00:24:43.001 } 00:24:43.001 } 00:24:43.001 ] 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "subsystem": "bdev", 00:24:43.001 "config": [ 00:24:43.001 { 00:24:43.001 "method": "bdev_set_options", 00:24:43.001 "params": { 00:24:43.001 "bdev_io_pool_size": 65535, 00:24:43.001 "bdev_io_cache_size": 256, 00:24:43.001 "bdev_auto_examine": true, 00:24:43.001 "iobuf_small_cache_size": 128, 00:24:43.001 "iobuf_large_cache_size": 16 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "bdev_raid_set_options", 00:24:43.001 "params": { 00:24:43.001 "process_window_size_kb": 1024 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "bdev_iscsi_set_options", 00:24:43.001 "params": { 00:24:43.001 "timeout_sec": 30 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "bdev_nvme_set_options", 00:24:43.001 "params": { 00:24:43.001 "action_on_timeout": "none", 00:24:43.001 "timeout_us": 0, 00:24:43.001 "timeout_admin_us": 0, 00:24:43.001 "keep_alive_timeout_ms": 10000, 00:24:43.001 "arbitration_burst": 0, 00:24:43.001 "low_priority_weight": 0, 00:24:43.001 "medium_priority_weight": 0, 00:24:43.001 "high_priority_weight": 0, 00:24:43.001 "nvme_adminq_poll_period_us": 10000, 00:24:43.001 "nvme_ioq_poll_period_us": 0, 00:24:43.001 "io_queue_requests": 0, 00:24:43.001 "delay_cmd_submit": true, 00:24:43.001 "transport_retry_count": 4, 00:24:43.001 "bdev_retry_count": 3, 00:24:43.001 "transport_ack_timeout": 0, 00:24:43.001 "ctrlr_loss_timeout_sec": 0, 00:24:43.001 "reconnect_delay_sec": 0, 00:24:43.001 "fast_io_fail_timeout_sec": 0, 00:24:43.001 "disable_auto_failback": false, 00:24:43.001 "generate_uuids": false, 00:24:43.001 "transport_tos": 0, 00:24:43.001 "nvme_error_stat": false, 00:24:43.001 "rdma_srq_size": 0, 00:24:43.001 "io_path_stat": false, 00:24:43.001 "allow_accel_sequence": false, 00:24:43.001 "rdma_max_cq_size": 0, 00:24:43.001 "rdma_cm_event_timeout_ms": 0, 00:24:43.001 "dhchap_digests": [ 00:24:43.001 "sha256", 00:24:43.001 "sha384", 00:24:43.001 "sha512" 00:24:43.001 ], 00:24:43.001 "dhchap_dhgroups": [ 00:24:43.001 "null", 00:24:43.001 "ffdhe2048", 00:24:43.001 "ffdhe3072", 00:24:43.001 "ffdhe4096", 00:24:43.001 "ffdhe6144", 00:24:43.001 "ffdhe8192" 00:24:43.001 ] 00:24:43.001 } 00:24:43.001 }, 00:24:43.001 { 00:24:43.001 "method": "bdev_nvme_set_hotplug", 00:24:43.002 "params": { 00:24:43.002 "period_us": 100000, 00:24:43.002 "enable": false 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "bdev_malloc_create", 00:24:43.002 "params": { 00:24:43.002 "name": "malloc0", 00:24:43.002 "num_blocks": 8192, 00:24:43.002 "block_size": 4096, 00:24:43.002 "physical_block_size": 4096, 00:24:43.002 "uuid": 
"697a7133-4b2e-4ee2-a603-fb07edc9b83f", 00:24:43.002 "optimal_io_boundary": 0 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "bdev_wait_for_examine" 00:24:43.002 } 00:24:43.002 ] 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "subsystem": "nbd", 00:24:43.002 "config": [] 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "subsystem": "scheduler", 00:24:43.002 "config": [ 00:24:43.002 { 00:24:43.002 "method": "framework_set_scheduler", 00:24:43.002 "params": { 00:24:43.002 "name": "static" 00:24:43.002 } 00:24:43.002 } 00:24:43.002 ] 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "subsystem": "nvmf", 00:24:43.002 "config": [ 00:24:43.002 { 00:24:43.002 "method": "nvmf_set_config", 00:24:43.002 "params": { 00:24:43.002 "discovery_filter": "match_any", 00:24:43.002 "admin_cmd_passthru": { 00:24:43.002 "identify_ctrlr": false 00:24:43.002 } 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_set_max_subsystems", 00:24:43.002 "params": { 00:24:43.002 "max_subsystems": 1024 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_set_crdt", 00:24:43.002 "params": { 00:24:43.002 "crdt1": 0, 00:24:43.002 "crdt2": 0, 00:24:43.002 "crdt3": 0 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_create_transport", 00:24:43.002 "params": { 00:24:43.002 "trtype": "TCP", 00:24:43.002 "max_queue_depth": 128, 00:24:43.002 "max_io_qpairs_per_ctrlr": 127, 00:24:43.002 "in_capsule_data_size": 4096, 00:24:43.002 "max_io_size": 131072, 00:24:43.002 "io_unit_size": 131072, 00:24:43.002 "max_aq_depth": 128, 00:24:43.002 "num_shared_buffers": 511, 00:24:43.002 "buf_cache_size": 4294967295, 00:24:43.002 "dif_insert_or_strip": false, 00:24:43.002 "zcopy": false, 00:24:43.002 "c2h_success": false, 00:24:43.002 "sock_priority": 0, 00:24:43.002 "abort_timeout_sec": 1, 00:24:43.002 "ack_timeout": 0, 00:24:43.002 "data_wr_pool_size": 0 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_create_subsystem", 00:24:43.002 "params": { 00:24:43.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.002 "allow_any_host": false, 00:24:43.002 "serial_number": "SPDK00000000000001", 00:24:43.002 "model_number": "SPDK bdev Controller", 00:24:43.002 "max_namespaces": 10, 00:24:43.002 "min_cntlid": 1, 00:24:43.002 "max_cntlid": 65519, 00:24:43.002 "ana_reporting": false 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_subsystem_add_host", 00:24:43.002 "params": { 00:24:43.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.002 "host": "nqn.2016-06.io.spdk:host1", 00:24:43.002 "psk": "/tmp/tmp.nU1Vut9UEl" 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_subsystem_add_ns", 00:24:43.002 "params": { 00:24:43.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.002 "namespace": { 00:24:43.002 "nsid": 1, 00:24:43.002 "bdev_name": "malloc0", 00:24:43.002 "nguid": "697A71334B2E4EE2A603FB07EDC9B83F", 00:24:43.002 "uuid": "697a7133-4b2e-4ee2-a603-fb07edc9b83f", 00:24:43.002 "no_auto_visible": false 00:24:43.002 } 00:24:43.002 } 00:24:43.002 }, 00:24:43.002 { 00:24:43.002 "method": "nvmf_subsystem_add_listener", 00:24:43.002 "params": { 00:24:43.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.002 "listen_address": { 00:24:43.002 "trtype": "TCP", 00:24:43.002 "adrfam": "IPv4", 00:24:43.002 "traddr": "10.0.0.2", 00:24:43.002 "trsvcid": "4420" 00:24:43.002 }, 00:24:43.002 "secure_channel": true 00:24:43.002 } 00:24:43.002 } 00:24:43.002 ] 00:24:43.002 } 00:24:43.002 ] 00:24:43.002 }' 00:24:43.002 14:57:22 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:43.261 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:43.261 "subsystems": [ 00:24:43.261 { 00:24:43.261 "subsystem": "keyring", 00:24:43.261 "config": [] 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "subsystem": "iobuf", 00:24:43.261 "config": [ 00:24:43.261 { 00:24:43.261 "method": "iobuf_set_options", 00:24:43.261 "params": { 00:24:43.261 "small_pool_count": 8192, 00:24:43.261 "large_pool_count": 1024, 00:24:43.261 "small_bufsize": 8192, 00:24:43.261 "large_bufsize": 135168 00:24:43.261 } 00:24:43.261 } 00:24:43.261 ] 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "subsystem": "sock", 00:24:43.261 "config": [ 00:24:43.261 { 00:24:43.261 "method": "sock_set_default_impl", 00:24:43.261 "params": { 00:24:43.261 "impl_name": "posix" 00:24:43.261 } 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "method": "sock_impl_set_options", 00:24:43.261 "params": { 00:24:43.261 "impl_name": "ssl", 00:24:43.261 "recv_buf_size": 4096, 00:24:43.261 "send_buf_size": 4096, 00:24:43.261 "enable_recv_pipe": true, 00:24:43.261 "enable_quickack": false, 00:24:43.261 "enable_placement_id": 0, 00:24:43.261 "enable_zerocopy_send_server": true, 00:24:43.261 "enable_zerocopy_send_client": false, 00:24:43.261 "zerocopy_threshold": 0, 00:24:43.261 "tls_version": 0, 00:24:43.261 "enable_ktls": false 00:24:43.261 } 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "method": "sock_impl_set_options", 00:24:43.261 "params": { 00:24:43.261 "impl_name": "posix", 00:24:43.261 "recv_buf_size": 2097152, 00:24:43.261 "send_buf_size": 2097152, 00:24:43.261 "enable_recv_pipe": true, 00:24:43.261 "enable_quickack": false, 00:24:43.261 "enable_placement_id": 0, 00:24:43.261 "enable_zerocopy_send_server": true, 00:24:43.261 "enable_zerocopy_send_client": false, 00:24:43.261 "zerocopy_threshold": 0, 00:24:43.261 "tls_version": 0, 00:24:43.261 "enable_ktls": false 00:24:43.261 } 00:24:43.261 } 00:24:43.261 ] 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "subsystem": "vmd", 00:24:43.261 "config": [] 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "subsystem": "accel", 00:24:43.261 "config": [ 00:24:43.261 { 00:24:43.261 "method": "accel_set_options", 00:24:43.261 "params": { 00:24:43.261 "small_cache_size": 128, 00:24:43.261 "large_cache_size": 16, 00:24:43.261 "task_count": 2048, 00:24:43.261 "sequence_count": 2048, 00:24:43.261 "buf_count": 2048 00:24:43.261 } 00:24:43.261 } 00:24:43.261 ] 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "subsystem": "bdev", 00:24:43.261 "config": [ 00:24:43.261 { 00:24:43.261 "method": "bdev_set_options", 00:24:43.261 "params": { 00:24:43.261 "bdev_io_pool_size": 65535, 00:24:43.261 "bdev_io_cache_size": 256, 00:24:43.261 "bdev_auto_examine": true, 00:24:43.261 "iobuf_small_cache_size": 128, 00:24:43.261 "iobuf_large_cache_size": 16 00:24:43.261 } 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "method": "bdev_raid_set_options", 00:24:43.261 "params": { 00:24:43.261 "process_window_size_kb": 1024 00:24:43.261 } 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "method": "bdev_iscsi_set_options", 00:24:43.261 "params": { 00:24:43.261 "timeout_sec": 30 00:24:43.261 } 00:24:43.261 }, 00:24:43.261 { 00:24:43.261 "method": "bdev_nvme_set_options", 00:24:43.261 "params": { 00:24:43.261 "action_on_timeout": "none", 00:24:43.261 "timeout_us": 0, 00:24:43.261 "timeout_admin_us": 0, 00:24:43.261 "keep_alive_timeout_ms": 10000, 00:24:43.261 "arbitration_burst": 0, 
00:24:43.261 "low_priority_weight": 0, 00:24:43.261 "medium_priority_weight": 0, 00:24:43.261 "high_priority_weight": 0, 00:24:43.261 "nvme_adminq_poll_period_us": 10000, 00:24:43.261 "nvme_ioq_poll_period_us": 0, 00:24:43.261 "io_queue_requests": 512, 00:24:43.261 "delay_cmd_submit": true, 00:24:43.261 "transport_retry_count": 4, 00:24:43.261 "bdev_retry_count": 3, 00:24:43.261 "transport_ack_timeout": 0, 00:24:43.261 "ctrlr_loss_timeout_sec": 0, 00:24:43.261 "reconnect_delay_sec": 0, 00:24:43.261 "fast_io_fail_timeout_sec": 0, 00:24:43.261 "disable_auto_failback": false, 00:24:43.262 "generate_uuids": false, 00:24:43.262 "transport_tos": 0, 00:24:43.262 "nvme_error_stat": false, 00:24:43.262 "rdma_srq_size": 0, 00:24:43.262 "io_path_stat": false, 00:24:43.262 "allow_accel_sequence": false, 00:24:43.262 "rdma_max_cq_size": 0, 00:24:43.262 "rdma_cm_event_timeout_ms": 0, 00:24:43.262 "dhchap_digests": [ 00:24:43.262 "sha256", 00:24:43.262 "sha384", 00:24:43.262 "sha512" 00:24:43.262 ], 00:24:43.262 "dhchap_dhgroups": [ 00:24:43.262 "null", 00:24:43.262 "ffdhe2048", 00:24:43.262 "ffdhe3072", 00:24:43.262 "ffdhe4096", 00:24:43.262 "ffdhe6144", 00:24:43.262 "ffdhe8192" 00:24:43.262 ] 00:24:43.262 } 00:24:43.262 }, 00:24:43.262 { 00:24:43.262 "method": "bdev_nvme_attach_controller", 00:24:43.262 "params": { 00:24:43.262 "name": "TLSTEST", 00:24:43.262 "trtype": "TCP", 00:24:43.262 "adrfam": "IPv4", 00:24:43.262 "traddr": "10.0.0.2", 00:24:43.262 "trsvcid": "4420", 00:24:43.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.262 "prchk_reftag": false, 00:24:43.262 "prchk_guard": false, 00:24:43.262 "ctrlr_loss_timeout_sec": 0, 00:24:43.262 "reconnect_delay_sec": 0, 00:24:43.262 "fast_io_fail_timeout_sec": 0, 00:24:43.262 "psk": "/tmp/tmp.nU1Vut9UEl", 00:24:43.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.262 "hdgst": false, 00:24:43.262 "ddgst": false 00:24:43.262 } 00:24:43.262 }, 00:24:43.262 { 00:24:43.262 "method": "bdev_nvme_set_hotplug", 00:24:43.262 "params": { 00:24:43.262 "period_us": 100000, 00:24:43.262 "enable": false 00:24:43.262 } 00:24:43.262 }, 00:24:43.262 { 00:24:43.262 "method": "bdev_wait_for_examine" 00:24:43.262 } 00:24:43.262 ] 00:24:43.262 }, 00:24:43.262 { 00:24:43.262 "subsystem": "nbd", 00:24:43.262 "config": [] 00:24:43.262 } 00:24:43.262 ] 00:24:43.262 }' 00:24:43.262 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1936717 00:24:43.262 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1936717 ']' 00:24:43.262 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1936717 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1936717 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:43.521 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1936717' 00:24:43.522 killing process with pid 1936717 00:24:43.522 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1936717 00:24:43.522 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.522 00:24:43.522 Latency(us) 00:24:43.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:43.522 =================================================================================================================== 00:24:43.522 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:43.522 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1936717 00:24:43.522 [2024-07-14 14:57:22.594481] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:44.458 14:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1936424 00:24:44.458 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1936424 ']' 00:24:44.458 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1936424 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1936424 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1936424' 00:24:44.459 killing process with pid 1936424 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1936424 00:24:44.459 [2024-07-14 14:57:23.539672] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:44.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1936424 00:24:45.835 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:45.835 14:57:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.835 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.835 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:45.835 "subsystems": [ 00:24:45.835 { 00:24:45.835 "subsystem": "keyring", 00:24:45.835 "config": [] 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "subsystem": "iobuf", 00:24:45.835 "config": [ 00:24:45.835 { 00:24:45.835 "method": "iobuf_set_options", 00:24:45.835 "params": { 00:24:45.835 "small_pool_count": 8192, 00:24:45.835 "large_pool_count": 1024, 00:24:45.835 "small_bufsize": 8192, 00:24:45.835 "large_bufsize": 135168 00:24:45.835 } 00:24:45.835 } 00:24:45.835 ] 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "subsystem": "sock", 00:24:45.835 "config": [ 00:24:45.835 { 00:24:45.835 "method": "sock_set_default_impl", 00:24:45.835 "params": { 00:24:45.835 "impl_name": "posix" 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "sock_impl_set_options", 00:24:45.835 "params": { 00:24:45.835 "impl_name": "ssl", 00:24:45.835 "recv_buf_size": 4096, 00:24:45.835 "send_buf_size": 4096, 00:24:45.835 "enable_recv_pipe": true, 00:24:45.835 "enable_quickack": false, 00:24:45.835 "enable_placement_id": 0, 00:24:45.835 "enable_zerocopy_send_server": true, 00:24:45.835 "enable_zerocopy_send_client": false, 00:24:45.835 "zerocopy_threshold": 0, 00:24:45.835 "tls_version": 0, 00:24:45.835 "enable_ktls": false 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "sock_impl_set_options", 00:24:45.835 "params": { 00:24:45.835 "impl_name": "posix", 00:24:45.835 
"recv_buf_size": 2097152, 00:24:45.835 "send_buf_size": 2097152, 00:24:45.835 "enable_recv_pipe": true, 00:24:45.835 "enable_quickack": false, 00:24:45.835 "enable_placement_id": 0, 00:24:45.835 "enable_zerocopy_send_server": true, 00:24:45.835 "enable_zerocopy_send_client": false, 00:24:45.835 "zerocopy_threshold": 0, 00:24:45.835 "tls_version": 0, 00:24:45.835 "enable_ktls": false 00:24:45.835 } 00:24:45.835 } 00:24:45.835 ] 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "subsystem": "vmd", 00:24:45.835 "config": [] 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "subsystem": "accel", 00:24:45.835 "config": [ 00:24:45.835 { 00:24:45.835 "method": "accel_set_options", 00:24:45.835 "params": { 00:24:45.835 "small_cache_size": 128, 00:24:45.835 "large_cache_size": 16, 00:24:45.835 "task_count": 2048, 00:24:45.835 "sequence_count": 2048, 00:24:45.835 "buf_count": 2048 00:24:45.835 } 00:24:45.835 } 00:24:45.835 ] 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "subsystem": "bdev", 00:24:45.835 "config": [ 00:24:45.835 { 00:24:45.835 "method": "bdev_set_options", 00:24:45.835 "params": { 00:24:45.835 "bdev_io_pool_size": 65535, 00:24:45.835 "bdev_io_cache_size": 256, 00:24:45.835 "bdev_auto_examine": true, 00:24:45.835 "iobuf_small_cache_size": 128, 00:24:45.835 "iobuf_large_cache_size": 16 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "bdev_raid_set_options", 00:24:45.835 "params": { 00:24:45.835 "process_window_size_kb": 1024 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "bdev_iscsi_set_options", 00:24:45.835 "params": { 00:24:45.835 "timeout_sec": 30 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "bdev_nvme_set_options", 00:24:45.835 "params": { 00:24:45.835 "action_on_timeout": "none", 00:24:45.835 "timeout_us": 0, 00:24:45.835 "timeout_admin_us": 0, 00:24:45.835 "keep_alive_timeout_ms": 10000, 00:24:45.835 "arbitration_burst": 0, 00:24:45.835 "low_priority_weight": 0, 00:24:45.835 "medium_priority_weight": 0, 00:24:45.835 "high_priority_weight": 0, 00:24:45.835 "nvme_adminq_poll_period_us": 10000, 00:24:45.835 "nvme_ioq_poll_period_us": 0, 00:24:45.835 "io_queue_requests": 0, 00:24:45.835 "delay_cmd_submit": true, 00:24:45.835 "transport_retry_count": 4, 00:24:45.835 "bdev_retry_count": 3, 00:24:45.835 "transport_ack_timeout": 0, 00:24:45.835 "ctrlr_loss_timeout_sec": 0, 00:24:45.835 "reconnect_delay_sec": 0, 00:24:45.835 "fast_io_fail_timeout_sec": 0, 00:24:45.835 "disable_auto_failback": false, 00:24:45.835 "generate_uuids": false, 00:24:45.835 "transport_tos": 0, 00:24:45.835 "nvme_error_stat": false, 00:24:45.835 "rdma_srq_size": 0, 00:24:45.835 "io_path_stat": false, 00:24:45.835 "allow_accel_sequence": false, 00:24:45.835 "rdma_max_cq_size": 0, 00:24:45.835 "rdma_cm_event_timeout_ms": 0, 00:24:45.835 "dhchap_digests": [ 00:24:45.835 "sha256", 00:24:45.835 "sha384", 00:24:45.835 "sha512" 00:24:45.835 ], 00:24:45.835 "dhchap_dhgroups": [ 00:24:45.835 "null", 00:24:45.835 "ffdhe2048", 00:24:45.835 "ffdhe3072", 00:24:45.835 "ffdhe4096", 00:24:45.835 "ffdhe6144", 00:24:45.835 "ffdhe8192" 00:24:45.835 ] 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.835 "method": "bdev_nvme_set_hotplug", 00:24:45.835 "params": { 00:24:45.835 "period_us": 100000, 00:24:45.835 "enable": false 00:24:45.835 } 00:24:45.835 }, 00:24:45.835 { 00:24:45.836 "method": "bdev_malloc_create", 00:24:45.836 "params": { 00:24:45.836 "name": "malloc0", 00:24:45.836 "num_blocks": 8192, 00:24:45.836 "block_size": 4096, 00:24:45.836 "physical_block_size": 4096, 
00:24:45.836 "uuid": "697a7133-4b2e-4ee2-a603-fb07edc9b83f", 00:24:45.836 "optimal_io_boundary": 0 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "bdev_wait_for_examine" 00:24:45.836 } 00:24:45.836 ] 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "subsystem": "nbd", 00:24:45.836 "config": [] 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "subsystem": "scheduler", 00:24:45.836 "config": [ 00:24:45.836 { 00:24:45.836 "method": "framework_set_scheduler", 00:24:45.836 "params": { 00:24:45.836 "name": "static" 00:24:45.836 } 00:24:45.836 } 00:24:45.836 ] 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "subsystem": "nvmf", 00:24:45.836 "config": [ 00:24:45.836 { 00:24:45.836 "method": "nvmf_set_config", 00:24:45.836 "params": { 00:24:45.836 "discovery_filter": "match_any", 00:24:45.836 "admin_cmd_passthru": { 00:24:45.836 "identify_ctrlr": false 00:24:45.836 } 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_set_max_subsystems", 00:24:45.836 "params": { 00:24:45.836 "max_subsystems": 1024 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_set_crdt", 00:24:45.836 "params": { 00:24:45.836 "crdt1": 0, 00:24:45.836 "crdt2": 0, 00:24:45.836 "crdt3": 0 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_create_transport", 00:24:45.836 "params": { 00:24:45.836 "trtype": "TCP", 00:24:45.836 "max_queue_depth": 128, 00:24:45.836 "max_io_qpairs_per_ctrlr": 127, 00:24:45.836 "in_capsule_data_size": 4096, 00:24:45.836 "max_io_size": 131072, 00:24:45.836 "io_unit_size": 131072, 00:24:45.836 "max_aq_depth": 128, 00:24:45.836 "num_shared_buffers": 511, 00:24:45.836 "buf_cache_size": 4294967295, 00:24:45.836 "dif_insert_or_strip": false, 00:24:45.836 "zcopy": false, 00:24:45.836 "c2h_success": false, 00:24:45.836 "sock_priority": 0, 00:24:45.836 "abort_timeout_sec": 1, 00:24:45.836 "ack_timeout": 0, 00:24:45.836 "data_wr_pool_size": 0 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_create_subsystem", 00:24:45.836 "params": { 00:24:45.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.836 "allow_any_host": false, 00:24:45.836 "serial_number": "SPDK00000000000001", 00:24:45.836 "model_number": "SPDK bdev Controller", 00:24:45.836 "max_namespaces": 10, 00:24:45.836 "min_cntlid": 1, 00:24:45.836 "max_cntlid": 65519, 00:24:45.836 "ana_reporting": false 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_subsystem_add_host", 00:24:45.836 "params": { 00:24:45.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.836 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.836 "psk": "/tmp/tmp.nU1Vut9UEl" 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_subsystem_add_ns", 00:24:45.836 "params": { 00:24:45.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.836 "namespace": { 00:24:45.836 "nsid": 1, 00:24:45.836 "bdev_name": "malloc0", 00:24:45.836 "nguid": "697A71334B2E4EE2A603FB07EDC9B83F", 00:24:45.836 "uuid": "697a7133-4b2e-4ee2-a603-fb07edc9b83f", 00:24:45.836 "no_auto_visible": false 00:24:45.836 } 00:24:45.836 } 00:24:45.836 }, 00:24:45.836 { 00:24:45.836 "method": "nvmf_subsystem_add_listener", 00:24:45.836 "params": { 00:24:45.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.836 "listen_address": { 00:24:45.836 "trtype": "TCP", 00:24:45.836 "adrfam": "IPv4", 00:24:45.836 "traddr": "10.0.0.2", 00:24:45.836 "trsvcid": "4420" 00:24:45.836 }, 00:24:45.836 "secure_channel": true 00:24:45.836 } 00:24:45.836 } 00:24:45.836 ] 00:24:45.836 } 00:24:45.836 ] 00:24:45.836 }' 
00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1937263 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1937263 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1937263 ']' 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.836 14:57:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.836 [2024-07-14 14:57:25.100464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:45.836 [2024-07-14 14:57:25.100617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.096 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.096 [2024-07-14 14:57:25.241069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.354 [2024-07-14 14:57:25.504447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.354 [2024-07-14 14:57:25.504533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.354 [2024-07-14 14:57:25.504563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.354 [2024-07-14 14:57:25.504589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.354 [2024-07-14 14:57:25.504611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
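Note on the launch pattern traced above: the target is not given a config file on disk; the JSON document echoed just before it is handed to nvmf_tgt as /dev/fd/62. A minimal stand-alone sketch of that pattern, with a placeholder binary path and a deliberately tiny config (the real one is the large document echoed above), could look like this:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt with an inline JSON config via process substitution.
# <(...) appears inside the child process as /dev/fd/NN, which is the
# "-c /dev/fd/62" seen in the command line above.
set -euo pipefail

NVMF_TGT=./build/bin/nvmf_tgt   # placeholder path; the log uses the Jenkins workspace build

config='{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } }
      ]
    }
  ]
}'

"$NVMF_TGT" -m 0x2 -c <(echo "$config") &
echo "nvmf_tgt launched as pid $!"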
00:24:46.354 [2024-07-14 14:57:25.504780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.919 [2024-07-14 14:57:26.054160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.919 [2024-07-14 14:57:26.070129] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:46.919 [2024-07-14 14:57:26.086145] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.919 [2024-07-14 14:57:26.086466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1937411 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1937411 /var/tmp/bdevperf.sock 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1937411 ']' 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
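The bdevperf process launched here follows the same inline-config pattern over /dev/fd/63, and the -z flag keeps it idle on its private RPC socket until the test triggers the workload (the bdevperf.py perform_tests call traced further down). A rough sketch of that control flow, assuming an SPDK checkout as the working directory:

#!/usr/bin/env bash
# Sketch: run bdevperf in "wait for RPC" mode and drive it over its own socket.
set -euo pipefail

SOCK=/var/tmp/bdevperf.sock
./build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
perf_pid=$!

# wait until the RPC socket exists before issuing commands against it
until [ -S "$SOCK" ]; do sleep 0.1; done

# in the real test the TLS-attached NVMe bdev comes either from the inline
# config (as in this run) or from rpc.py calls against $SOCK (later runs);
# once the bdev exists, the measurement is kicked off:
./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests

wait "$perf_pid"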
00:24:46.919 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:46.919 "subsystems": [ 00:24:46.919 { 00:24:46.919 "subsystem": "keyring", 00:24:46.919 "config": [] 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "subsystem": "iobuf", 00:24:46.919 "config": [ 00:24:46.919 { 00:24:46.919 "method": "iobuf_set_options", 00:24:46.919 "params": { 00:24:46.919 "small_pool_count": 8192, 00:24:46.919 "large_pool_count": 1024, 00:24:46.919 "small_bufsize": 8192, 00:24:46.919 "large_bufsize": 135168 00:24:46.919 } 00:24:46.919 } 00:24:46.919 ] 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "subsystem": "sock", 00:24:46.919 "config": [ 00:24:46.919 { 00:24:46.919 "method": "sock_set_default_impl", 00:24:46.919 "params": { 00:24:46.919 "impl_name": "posix" 00:24:46.919 } 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "method": "sock_impl_set_options", 00:24:46.919 "params": { 00:24:46.919 "impl_name": "ssl", 00:24:46.919 "recv_buf_size": 4096, 00:24:46.919 "send_buf_size": 4096, 00:24:46.919 "enable_recv_pipe": true, 00:24:46.919 "enable_quickack": false, 00:24:46.919 "enable_placement_id": 0, 00:24:46.919 "enable_zerocopy_send_server": true, 00:24:46.919 "enable_zerocopy_send_client": false, 00:24:46.919 "zerocopy_threshold": 0, 00:24:46.919 "tls_version": 0, 00:24:46.919 "enable_ktls": false 00:24:46.919 } 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "method": "sock_impl_set_options", 00:24:46.919 "params": { 00:24:46.919 "impl_name": "posix", 00:24:46.919 "recv_buf_size": 2097152, 00:24:46.919 "send_buf_size": 2097152, 00:24:46.919 "enable_recv_pipe": true, 00:24:46.919 "enable_quickack": false, 00:24:46.919 "enable_placement_id": 0, 00:24:46.919 "enable_zerocopy_send_server": true, 00:24:46.919 "enable_zerocopy_send_client": false, 00:24:46.919 "zerocopy_threshold": 0, 00:24:46.919 "tls_version": 0, 00:24:46.919 "enable_ktls": false 00:24:46.919 } 00:24:46.919 } 00:24:46.919 ] 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "subsystem": "vmd", 00:24:46.919 "config": [] 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "subsystem": "accel", 00:24:46.919 "config": [ 00:24:46.919 { 00:24:46.919 "method": "accel_set_options", 00:24:46.919 "params": { 00:24:46.919 "small_cache_size": 128, 00:24:46.919 "large_cache_size": 16, 00:24:46.919 "task_count": 2048, 00:24:46.919 "sequence_count": 2048, 00:24:46.919 "buf_count": 2048 00:24:46.919 } 00:24:46.919 } 00:24:46.919 ] 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "subsystem": "bdev", 00:24:46.919 "config": [ 00:24:46.919 { 00:24:46.919 "method": "bdev_set_options", 00:24:46.919 "params": { 00:24:46.919 "bdev_io_pool_size": 65535, 00:24:46.919 "bdev_io_cache_size": 256, 00:24:46.919 "bdev_auto_examine": true, 00:24:46.919 "iobuf_small_cache_size": 128, 00:24:46.919 "iobuf_large_cache_size": 16 00:24:46.919 } 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "method": "bdev_raid_set_options", 00:24:46.919 "params": { 00:24:46.919 "process_window_size_kb": 1024 00:24:46.919 } 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "method": "bdev_iscsi_set_options", 00:24:46.919 "params": { 00:24:46.919 "timeout_sec": 30 00:24:46.919 } 00:24:46.919 }, 00:24:46.919 { 00:24:46.919 "method": "bdev_nvme_set_options", 00:24:46.919 "params": { 00:24:46.919 "action_on_timeout": "none", 00:24:46.919 "timeout_us": 0, 00:24:46.919 "timeout_admin_us": 0, 00:24:46.919 "keep_alive_timeout_ms": 10000, 00:24:46.919 "arbitration_burst": 0, 00:24:46.919 "low_priority_weight": 0, 00:24:46.919 "medium_priority_weight": 0, 00:24:46.919 "high_priority_weight": 0, 00:24:46.919 
"nvme_adminq_poll_period_us": 10000, 00:24:46.919 "nvme_ioq_poll_period_us": 0, 00:24:46.919 "io_queue_requests": 512, 00:24:46.919 "delay_cmd_submit": true, 00:24:46.919 "transport_retry_count": 4, 00:24:46.919 "bdev_retry_count": 3, 00:24:46.919 "transport_ack_timeout": 0, 00:24:46.919 "ctrlr_loss_timeout_sec": 0, 00:24:46.919 "reconnect_delay_sec": 0, 00:24:46.919 "fast_io_fail_timeout_sec": 0, 00:24:46.919 "disable_auto_failback": false, 00:24:46.919 "generate_uuids": false, 00:24:46.919 "transport_tos": 0, 00:24:46.919 "nvme_error_stat": false, 00:24:46.919 "rdma_srq_size": 0, 00:24:46.919 "io_path_stat": false, 00:24:46.919 "allow_accel_sequence": false, 00:24:46.919 "rdma_max_cq_size": 0, 00:24:46.919 "rdma_cm_event_timeout_ms": 0, 00:24:46.920 "dhchap_digests": [ 00:24:46.920 "sha256", 00:24:46.920 "sha384", 00:24:46.920 "sha512" 00:24:46.920 ], 00:24:46.920 "dhchap_dhgroups": [ 00:24:46.920 "null", 00:24:46.920 "ffdhe2048", 00:24:46.920 "ffdhe3072", 00:24:46.920 "ffdhe4096", 00:24:46.920 "ffdhe6144", 00:24:46.920 "ffdhe8192" 00:24:46.920 ] 00:24:46.920 } 00:24:46.920 }, 00:24:46.920 { 00:24:46.920 "method": "bdev_nvme_attach_controller", 00:24:46.920 "params": { 00:24:46.920 "name": "TLSTEST", 00:24:46.920 "trtype": "TCP", 00:24:46.920 "adrfam": "IPv4", 00:24:46.920 "traddr": "10.0.0.2", 00:24:46.920 "trsvcid": "4420", 00:24:46.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.920 "prchk_reftag": false, 00:24:46.920 "prchk_guard": false, 00:24:46.920 "ctrlr_loss_timeout_sec": 0, 00:24:46.920 "reconnect_delay_sec": 0, 00:24:46.920 "fast_io_fail_timeout_sec": 0, 00:24:46.920 "psk": "/tmp/tmp.nU1Vut9UEl", 00:24:46.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.920 "hdgst": false, 00:24:46.920 "ddgst": false 00:24:46.920 } 00:24:46.920 }, 00:24:46.920 { 00:24:46.920 "method": "bdev_nvme_set_hotplug", 00:24:46.920 "params": { 00:24:46.920 "period_us": 100000, 00:24:46.920 "enable": false 00:24:46.920 } 00:24:46.920 }, 00:24:46.920 { 00:24:46.920 "method": "bdev_wait_for_examine" 00:24:46.920 } 00:24:46.920 ] 00:24:46.920 }, 00:24:46.920 { 00:24:46.920 "subsystem": "nbd", 00:24:46.920 "config": [] 00:24:46.920 } 00:24:46.920 ] 00:24:46.920 }' 00:24:46.920 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.920 14:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 [2024-07-14 14:57:26.222934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:46.920 [2024-07-14 14:57:26.223088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937411 ] 00:24:47.177 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.177 [2024-07-14 14:57:26.348380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.434 [2024-07-14 14:57:26.569774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.692 [2024-07-14 14:57:26.949062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.692 [2024-07-14 14:57:26.949249] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:47.950 14:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.950 14:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:47.950 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:47.950 Running I/O for 10 seconds... 00:25:00.196 00:25:00.196 Latency(us) 00:25:00.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.196 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.196 Verification LBA range: start 0x0 length 0x2000 00:25:00.196 TLSTESTn1 : 10.04 2688.67 10.50 0.00 0.00 47482.40 9514.86 62137.84 00:25:00.196 =================================================================================================================== 00:25:00.196 Total : 2688.67 10.50 0.00 0.00 47482.40 9514.86 62137.84 00:25:00.196 0 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1937411 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1937411 ']' 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1937411 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1937411 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1937411' 00:25:00.196 killing process with pid 1937411 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1937411 00:25:00.196 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.196 00:25:00.196 Latency(us) 00:25:00.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.196 =================================================================================================================== 00:25:00.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.196 [2024-07-14 14:57:37.363801] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:00.196 14:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1937411 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1937263 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1937263 ']' 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1937263 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1937263 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1937263' 00:25:00.196 killing process with pid 1937263 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1937263 00:25:00.196 [2024-07-14 14:57:38.381646] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:00.196 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1937263 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1939005 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1939005 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1939005 ']' 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.764 14:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 [2024-07-14 14:57:39.950857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:00.764 [2024-07-14 14:57:39.951009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.764 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.022 [2024-07-14 14:57:40.092155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.280 [2024-07-14 14:57:40.340259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.280 [2024-07-14 14:57:40.340333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.280 [2024-07-14 14:57:40.340371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.280 [2024-07-14 14:57:40.340397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.280 [2024-07-14 14:57:40.340419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.280 [2024-07-14 14:57:40.340465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nU1Vut9UEl 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nU1Vut9UEl 00:25:01.847 14:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:01.847 [2024-07-14 14:57:41.130201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.847 14:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:02.105 14:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:02.363 [2024-07-14 14:57:41.627637] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:02.363 [2024-07-14 14:57:41.628013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.363 14:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:02.933 malloc0 00:25:02.933 14:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:03.191 14:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.nU1Vut9UEl 00:25:03.449 [2024-07-14 14:57:42.506543] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:03.449 14:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1939293 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1939293 /var/tmp/bdevperf.sock 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1939293 ']' 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.450 14:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.450 [2024-07-14 14:57:42.607866] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:03.450 [2024-07-14 14:57:42.608031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939293 ] 00:25:03.450 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.450 [2024-07-14 14:57:42.742445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.709 [2024-07-14 14:57:43.003816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.276 14:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.276 14:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:04.276 14:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nU1Vut9UEl 00:25:04.843 14:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:04.843 [2024-07-14 14:57:44.068746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:05.102 nvme0n1 00:25:05.102 14:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:05.102 Running I/O for 1 seconds... 
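Stripped of the xtrace noise, the TLS plumbing exercised in this pass condenses to the rpc.py sequence below (same NQNs, address, ports and key file as in the trace above; the relative ./scripts/rpc.py path assumes an SPDK checkout, whereas the log uses the absolute Jenkins workspace path):

PSK=/tmp/tmp.nU1Vut9UEl   # the PSK file used throughout this log

# target side (default RPC socket /var/tmp/spdk.sock)
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"

# initiator side (bdevperf's RPC socket): register the key in the keyring, then
# attach over TLS by key name (the path-based PSK form is the one flagged as
# deprecated in the warnings earlier in this log)
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1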
00:25:06.040 00:25:06.040 Latency(us) 00:25:06.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.040 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:06.040 Verification LBA range: start 0x0 length 0x2000 00:25:06.040 nvme0n1 : 1.04 2449.92 9.57 0.00 0.00 51478.44 9223.59 46020.84 00:25:06.040 =================================================================================================================== 00:25:06.040 Total : 2449.92 9.57 0.00 0.00 51478.44 9223.59 46020.84 00:25:06.040 0 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1939293 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1939293 ']' 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1939293 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1939293 00:25:06.298 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:06.299 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:06.299 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1939293' 00:25:06.299 killing process with pid 1939293 00:25:06.299 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1939293 00:25:06.299 Received shutdown signal, test time was about 1.000000 seconds 00:25:06.299 00:25:06.299 Latency(us) 00:25:06.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.299 =================================================================================================================== 00:25:06.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.299 14:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1939293 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1939005 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1939005 ']' 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1939005 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1939005 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1939005' 00:25:07.233 killing process with pid 1939005 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1939005 00:25:07.233 [2024-07-14 14:57:46.533051] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:07.233 14:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1939005 00:25:09.135 14:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:09.135 14:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:09.135 
14:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:09.135 14:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1939964 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1939964 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1939964 ']' 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.135 14:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.135 [2024-07-14 14:57:48.085627] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:09.135 [2024-07-14 14:57:48.085776] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.135 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.135 [2024-07-14 14:57:48.224142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.393 [2024-07-14 14:57:48.483016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.393 [2024-07-14 14:57:48.483086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.393 [2024-07-14 14:57:48.483117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.393 [2024-07-14 14:57:48.483143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.393 [2024-07-14 14:57:48.483170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:09.393 [2024-07-14 14:57:48.483216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.982 [2024-07-14 14:57:49.059621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.982 malloc0 00:25:09.982 [2024-07-14 14:57:49.134473] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.982 [2024-07-14 14:57:49.134857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1940114 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1940114 /var/tmp/bdevperf.sock 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940114 ']' 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.982 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.982 [2024-07-14 14:57:49.250181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:09.982 [2024-07-14 14:57:49.250343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940114 ] 00:25:10.240 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.240 [2024-07-14 14:57:49.397621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.499 [2024-07-14 14:57:49.658221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.065 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.065 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:11.065 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nU1Vut9UEl 00:25:11.322 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:11.579 [2024-07-14 14:57:50.772367] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.579 nvme0n1 00:25:11.579 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:11.836 Running I/O for 1 seconds... 00:25:12.769 00:25:12.769 Latency(us) 00:25:12.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.769 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:12.769 Verification LBA range: start 0x0 length 0x2000 00:25:12.769 nvme0n1 : 1.03 2497.79 9.76 0.00 0.00 50591.34 9514.86 59419.31 00:25:12.769 =================================================================================================================== 00:25:12.769 Total : 2497.79 9.76 0.00 0.00 50591.34 9514.86 59419.31 00:25:12.769 0 00:25:12.769 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:12.769 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.769 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.027 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.027 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:13.027 "subsystems": [ 00:25:13.027 { 00:25:13.027 "subsystem": "keyring", 00:25:13.027 "config": [ 00:25:13.028 { 00:25:13.028 "method": "keyring_file_add_key", 00:25:13.028 "params": { 00:25:13.028 "name": "key0", 00:25:13.028 "path": "/tmp/tmp.nU1Vut9UEl" 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "iobuf", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "iobuf_set_options", 00:25:13.028 "params": { 00:25:13.028 "small_pool_count": 8192, 00:25:13.028 "large_pool_count": 1024, 00:25:13.028 "small_bufsize": 8192, 00:25:13.028 "large_bufsize": 135168 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "sock", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "sock_set_default_impl", 00:25:13.028 "params": { 00:25:13.028 "impl_name": "posix" 00:25:13.028 } 
00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "sock_impl_set_options", 00:25:13.028 "params": { 00:25:13.028 "impl_name": "ssl", 00:25:13.028 "recv_buf_size": 4096, 00:25:13.028 "send_buf_size": 4096, 00:25:13.028 "enable_recv_pipe": true, 00:25:13.028 "enable_quickack": false, 00:25:13.028 "enable_placement_id": 0, 00:25:13.028 "enable_zerocopy_send_server": true, 00:25:13.028 "enable_zerocopy_send_client": false, 00:25:13.028 "zerocopy_threshold": 0, 00:25:13.028 "tls_version": 0, 00:25:13.028 "enable_ktls": false 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "sock_impl_set_options", 00:25:13.028 "params": { 00:25:13.028 "impl_name": "posix", 00:25:13.028 "recv_buf_size": 2097152, 00:25:13.028 "send_buf_size": 2097152, 00:25:13.028 "enable_recv_pipe": true, 00:25:13.028 "enable_quickack": false, 00:25:13.028 "enable_placement_id": 0, 00:25:13.028 "enable_zerocopy_send_server": true, 00:25:13.028 "enable_zerocopy_send_client": false, 00:25:13.028 "zerocopy_threshold": 0, 00:25:13.028 "tls_version": 0, 00:25:13.028 "enable_ktls": false 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "vmd", 00:25:13.028 "config": [] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "accel", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "accel_set_options", 00:25:13.028 "params": { 00:25:13.028 "small_cache_size": 128, 00:25:13.028 "large_cache_size": 16, 00:25:13.028 "task_count": 2048, 00:25:13.028 "sequence_count": 2048, 00:25:13.028 "buf_count": 2048 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "bdev", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "bdev_set_options", 00:25:13.028 "params": { 00:25:13.028 "bdev_io_pool_size": 65535, 00:25:13.028 "bdev_io_cache_size": 256, 00:25:13.028 "bdev_auto_examine": true, 00:25:13.028 "iobuf_small_cache_size": 128, 00:25:13.028 "iobuf_large_cache_size": 16 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_raid_set_options", 00:25:13.028 "params": { 00:25:13.028 "process_window_size_kb": 1024 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_iscsi_set_options", 00:25:13.028 "params": { 00:25:13.028 "timeout_sec": 30 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_nvme_set_options", 00:25:13.028 "params": { 00:25:13.028 "action_on_timeout": "none", 00:25:13.028 "timeout_us": 0, 00:25:13.028 "timeout_admin_us": 0, 00:25:13.028 "keep_alive_timeout_ms": 10000, 00:25:13.028 "arbitration_burst": 0, 00:25:13.028 "low_priority_weight": 0, 00:25:13.028 "medium_priority_weight": 0, 00:25:13.028 "high_priority_weight": 0, 00:25:13.028 "nvme_adminq_poll_period_us": 10000, 00:25:13.028 "nvme_ioq_poll_period_us": 0, 00:25:13.028 "io_queue_requests": 0, 00:25:13.028 "delay_cmd_submit": true, 00:25:13.028 "transport_retry_count": 4, 00:25:13.028 "bdev_retry_count": 3, 00:25:13.028 "transport_ack_timeout": 0, 00:25:13.028 "ctrlr_loss_timeout_sec": 0, 00:25:13.028 "reconnect_delay_sec": 0, 00:25:13.028 "fast_io_fail_timeout_sec": 0, 00:25:13.028 "disable_auto_failback": false, 00:25:13.028 "generate_uuids": false, 00:25:13.028 "transport_tos": 0, 00:25:13.028 "nvme_error_stat": false, 00:25:13.028 "rdma_srq_size": 0, 00:25:13.028 "io_path_stat": false, 00:25:13.028 "allow_accel_sequence": false, 00:25:13.028 "rdma_max_cq_size": 0, 00:25:13.028 "rdma_cm_event_timeout_ms": 0, 00:25:13.028 "dhchap_digests": [ 00:25:13.028 "sha256", 
00:25:13.028 "sha384", 00:25:13.028 "sha512" 00:25:13.028 ], 00:25:13.028 "dhchap_dhgroups": [ 00:25:13.028 "null", 00:25:13.028 "ffdhe2048", 00:25:13.028 "ffdhe3072", 00:25:13.028 "ffdhe4096", 00:25:13.028 "ffdhe6144", 00:25:13.028 "ffdhe8192" 00:25:13.028 ] 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_nvme_set_hotplug", 00:25:13.028 "params": { 00:25:13.028 "period_us": 100000, 00:25:13.028 "enable": false 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_malloc_create", 00:25:13.028 "params": { 00:25:13.028 "name": "malloc0", 00:25:13.028 "num_blocks": 8192, 00:25:13.028 "block_size": 4096, 00:25:13.028 "physical_block_size": 4096, 00:25:13.028 "uuid": "446909af-f64b-4c2f-8746-fe8167cccfd9", 00:25:13.028 "optimal_io_boundary": 0 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "bdev_wait_for_examine" 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "nbd", 00:25:13.028 "config": [] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "scheduler", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "framework_set_scheduler", 00:25:13.028 "params": { 00:25:13.028 "name": "static" 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "subsystem": "nvmf", 00:25:13.028 "config": [ 00:25:13.028 { 00:25:13.028 "method": "nvmf_set_config", 00:25:13.028 "params": { 00:25:13.028 "discovery_filter": "match_any", 00:25:13.028 "admin_cmd_passthru": { 00:25:13.028 "identify_ctrlr": false 00:25:13.028 } 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_set_max_subsystems", 00:25:13.028 "params": { 00:25:13.028 "max_subsystems": 1024 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_set_crdt", 00:25:13.028 "params": { 00:25:13.028 "crdt1": 0, 00:25:13.028 "crdt2": 0, 00:25:13.028 "crdt3": 0 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_create_transport", 00:25:13.028 "params": { 00:25:13.028 "trtype": "TCP", 00:25:13.028 "max_queue_depth": 128, 00:25:13.028 "max_io_qpairs_per_ctrlr": 127, 00:25:13.028 "in_capsule_data_size": 4096, 00:25:13.028 "max_io_size": 131072, 00:25:13.028 "io_unit_size": 131072, 00:25:13.028 "max_aq_depth": 128, 00:25:13.028 "num_shared_buffers": 511, 00:25:13.028 "buf_cache_size": 4294967295, 00:25:13.028 "dif_insert_or_strip": false, 00:25:13.028 "zcopy": false, 00:25:13.028 "c2h_success": false, 00:25:13.028 "sock_priority": 0, 00:25:13.028 "abort_timeout_sec": 1, 00:25:13.028 "ack_timeout": 0, 00:25:13.028 "data_wr_pool_size": 0 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_create_subsystem", 00:25:13.028 "params": { 00:25:13.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.028 "allow_any_host": false, 00:25:13.028 "serial_number": "00000000000000000000", 00:25:13.028 "model_number": "SPDK bdev Controller", 00:25:13.028 "max_namespaces": 32, 00:25:13.028 "min_cntlid": 1, 00:25:13.028 "max_cntlid": 65519, 00:25:13.028 "ana_reporting": false 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_subsystem_add_host", 00:25:13.028 "params": { 00:25:13.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.028 "host": "nqn.2016-06.io.spdk:host1", 00:25:13.028 "psk": "key0" 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_subsystem_add_ns", 00:25:13.028 "params": { 00:25:13.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.028 "namespace": { 00:25:13.028 "nsid": 1, 
00:25:13.028 "bdev_name": "malloc0", 00:25:13.028 "nguid": "446909AFF64B4C2F8746FE8167CCCFD9", 00:25:13.028 "uuid": "446909af-f64b-4c2f-8746-fe8167cccfd9", 00:25:13.028 "no_auto_visible": false 00:25:13.028 } 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "method": "nvmf_subsystem_add_listener", 00:25:13.028 "params": { 00:25:13.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.028 "listen_address": { 00:25:13.028 "trtype": "TCP", 00:25:13.028 "adrfam": "IPv4", 00:25:13.028 "traddr": "10.0.0.2", 00:25:13.028 "trsvcid": "4420" 00:25:13.028 }, 00:25:13.028 "secure_channel": true 00:25:13.028 } 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }' 00:25:13.028 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:13.288 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:13.288 "subsystems": [ 00:25:13.288 { 00:25:13.288 "subsystem": "keyring", 00:25:13.288 "config": [ 00:25:13.288 { 00:25:13.288 "method": "keyring_file_add_key", 00:25:13.288 "params": { 00:25:13.288 "name": "key0", 00:25:13.288 "path": "/tmp/tmp.nU1Vut9UEl" 00:25:13.288 } 00:25:13.288 } 00:25:13.288 ] 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "subsystem": "iobuf", 00:25:13.288 "config": [ 00:25:13.288 { 00:25:13.288 "method": "iobuf_set_options", 00:25:13.288 "params": { 00:25:13.288 "small_pool_count": 8192, 00:25:13.288 "large_pool_count": 1024, 00:25:13.288 "small_bufsize": 8192, 00:25:13.288 "large_bufsize": 135168 00:25:13.288 } 00:25:13.288 } 00:25:13.288 ] 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "subsystem": "sock", 00:25:13.288 "config": [ 00:25:13.288 { 00:25:13.288 "method": "sock_set_default_impl", 00:25:13.288 "params": { 00:25:13.288 "impl_name": "posix" 00:25:13.288 } 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "method": "sock_impl_set_options", 00:25:13.288 "params": { 00:25:13.288 "impl_name": "ssl", 00:25:13.288 "recv_buf_size": 4096, 00:25:13.288 "send_buf_size": 4096, 00:25:13.288 "enable_recv_pipe": true, 00:25:13.288 "enable_quickack": false, 00:25:13.288 "enable_placement_id": 0, 00:25:13.288 "enable_zerocopy_send_server": true, 00:25:13.288 "enable_zerocopy_send_client": false, 00:25:13.288 "zerocopy_threshold": 0, 00:25:13.288 "tls_version": 0, 00:25:13.288 "enable_ktls": false 00:25:13.288 } 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "method": "sock_impl_set_options", 00:25:13.288 "params": { 00:25:13.288 "impl_name": "posix", 00:25:13.288 "recv_buf_size": 2097152, 00:25:13.288 "send_buf_size": 2097152, 00:25:13.288 "enable_recv_pipe": true, 00:25:13.288 "enable_quickack": false, 00:25:13.288 "enable_placement_id": 0, 00:25:13.288 "enable_zerocopy_send_server": true, 00:25:13.288 "enable_zerocopy_send_client": false, 00:25:13.288 "zerocopy_threshold": 0, 00:25:13.288 "tls_version": 0, 00:25:13.288 "enable_ktls": false 00:25:13.288 } 00:25:13.288 } 00:25:13.288 ] 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "subsystem": "vmd", 00:25:13.288 "config": [] 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "subsystem": "accel", 00:25:13.288 "config": [ 00:25:13.288 { 00:25:13.288 "method": "accel_set_options", 00:25:13.288 "params": { 00:25:13.288 "small_cache_size": 128, 00:25:13.288 "large_cache_size": 16, 00:25:13.288 "task_count": 2048, 00:25:13.288 "sequence_count": 2048, 00:25:13.288 "buf_count": 2048 00:25:13.288 } 00:25:13.288 } 00:25:13.288 ] 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "subsystem": "bdev", 00:25:13.288 "config": [ 
00:25:13.288 { 00:25:13.288 "method": "bdev_set_options", 00:25:13.288 "params": { 00:25:13.288 "bdev_io_pool_size": 65535, 00:25:13.288 "bdev_io_cache_size": 256, 00:25:13.288 "bdev_auto_examine": true, 00:25:13.288 "iobuf_small_cache_size": 128, 00:25:13.288 "iobuf_large_cache_size": 16 00:25:13.288 } 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "method": "bdev_raid_set_options", 00:25:13.288 "params": { 00:25:13.288 "process_window_size_kb": 1024 00:25:13.288 } 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "method": "bdev_iscsi_set_options", 00:25:13.288 "params": { 00:25:13.288 "timeout_sec": 30 00:25:13.288 } 00:25:13.288 }, 00:25:13.288 { 00:25:13.288 "method": "bdev_nvme_set_options", 00:25:13.288 "params": { 00:25:13.288 "action_on_timeout": "none", 00:25:13.288 "timeout_us": 0, 00:25:13.288 "timeout_admin_us": 0, 00:25:13.288 "keep_alive_timeout_ms": 10000, 00:25:13.288 "arbitration_burst": 0, 00:25:13.288 "low_priority_weight": 0, 00:25:13.288 "medium_priority_weight": 0, 00:25:13.288 "high_priority_weight": 0, 00:25:13.289 "nvme_adminq_poll_period_us": 10000, 00:25:13.289 "nvme_ioq_poll_period_us": 0, 00:25:13.289 "io_queue_requests": 512, 00:25:13.289 "delay_cmd_submit": true, 00:25:13.289 "transport_retry_count": 4, 00:25:13.289 "bdev_retry_count": 3, 00:25:13.289 "transport_ack_timeout": 0, 00:25:13.289 "ctrlr_loss_timeout_sec": 0, 00:25:13.289 "reconnect_delay_sec": 0, 00:25:13.289 "fast_io_fail_timeout_sec": 0, 00:25:13.289 "disable_auto_failback": false, 00:25:13.289 "generate_uuids": false, 00:25:13.289 "transport_tos": 0, 00:25:13.289 "nvme_error_stat": false, 00:25:13.289 "rdma_srq_size": 0, 00:25:13.289 "io_path_stat": false, 00:25:13.289 "allow_accel_sequence": false, 00:25:13.289 "rdma_max_cq_size": 0, 00:25:13.289 "rdma_cm_event_timeout_ms": 0, 00:25:13.289 "dhchap_digests": [ 00:25:13.289 "sha256", 00:25:13.289 "sha384", 00:25:13.289 "sha512" 00:25:13.289 ], 00:25:13.289 "dhchap_dhgroups": [ 00:25:13.289 "null", 00:25:13.289 "ffdhe2048", 00:25:13.289 "ffdhe3072", 00:25:13.289 "ffdhe4096", 00:25:13.289 "ffdhe6144", 00:25:13.289 "ffdhe8192" 00:25:13.289 ] 00:25:13.289 } 00:25:13.289 }, 00:25:13.289 { 00:25:13.289 "method": "bdev_nvme_attach_controller", 00:25:13.289 "params": { 00:25:13.289 "name": "nvme0", 00:25:13.289 "trtype": "TCP", 00:25:13.289 "adrfam": "IPv4", 00:25:13.289 "traddr": "10.0.0.2", 00:25:13.289 "trsvcid": "4420", 00:25:13.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.289 "prchk_reftag": false, 00:25:13.289 "prchk_guard": false, 00:25:13.289 "ctrlr_loss_timeout_sec": 0, 00:25:13.289 "reconnect_delay_sec": 0, 00:25:13.289 "fast_io_fail_timeout_sec": 0, 00:25:13.289 "psk": "key0", 00:25:13.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:13.289 "hdgst": false, 00:25:13.289 "ddgst": false 00:25:13.289 } 00:25:13.289 }, 00:25:13.289 { 00:25:13.289 "method": "bdev_nvme_set_hotplug", 00:25:13.289 "params": { 00:25:13.289 "period_us": 100000, 00:25:13.289 "enable": false 00:25:13.289 } 00:25:13.289 }, 00:25:13.289 { 00:25:13.289 "method": "bdev_enable_histogram", 00:25:13.289 "params": { 00:25:13.289 "name": "nvme0n1", 00:25:13.289 "enable": true 00:25:13.289 } 00:25:13.289 }, 00:25:13.289 { 00:25:13.289 "method": "bdev_wait_for_examine" 00:25:13.289 } 00:25:13.289 ] 00:25:13.289 }, 00:25:13.289 { 00:25:13.289 "subsystem": "nbd", 00:25:13.289 "config": [] 00:25:13.289 } 00:25:13.289 ] 00:25:13.289 }' 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1940114 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1940114 ']' 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940114 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940114 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940114' 00:25:13.289 killing process with pid 1940114 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940114 00:25:13.289 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.289 00:25:13.289 Latency(us) 00:25:13.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.289 =================================================================================================================== 00:25:13.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.289 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940114 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1939964 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1939964 ']' 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1939964 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1939964 00:25:14.690 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:14.691 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:14.691 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1939964' 00:25:14.691 killing process with pid 1939964 00:25:14.691 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1939964 00:25:14.691 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1939964 00:25:16.069 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:16.069 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.069 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:16.069 "subsystems": [ 00:25:16.069 { 00:25:16.069 "subsystem": "keyring", 00:25:16.069 "config": [ 00:25:16.069 { 00:25:16.069 "method": "keyring_file_add_key", 00:25:16.069 "params": { 00:25:16.069 "name": "key0", 00:25:16.069 "path": "/tmp/tmp.nU1Vut9UEl" 00:25:16.069 } 00:25:16.069 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "iobuf", 00:25:16.070 "config": [ 00:25:16.070 { 00:25:16.070 "method": "iobuf_set_options", 00:25:16.070 "params": { 00:25:16.070 "small_pool_count": 8192, 00:25:16.070 "large_pool_count": 1024, 00:25:16.070 "small_bufsize": 8192, 00:25:16.070 "large_bufsize": 135168 00:25:16.070 } 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "sock", 00:25:16.070 "config": [ 00:25:16.070 { 
00:25:16.070 "method": "sock_set_default_impl", 00:25:16.070 "params": { 00:25:16.070 "impl_name": "posix" 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "sock_impl_set_options", 00:25:16.070 "params": { 00:25:16.070 "impl_name": "ssl", 00:25:16.070 "recv_buf_size": 4096, 00:25:16.070 "send_buf_size": 4096, 00:25:16.070 "enable_recv_pipe": true, 00:25:16.070 "enable_quickack": false, 00:25:16.070 "enable_placement_id": 0, 00:25:16.070 "enable_zerocopy_send_server": true, 00:25:16.070 "enable_zerocopy_send_client": false, 00:25:16.070 "zerocopy_threshold": 0, 00:25:16.070 "tls_version": 0, 00:25:16.070 "enable_ktls": false 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "sock_impl_set_options", 00:25:16.070 "params": { 00:25:16.070 "impl_name": "posix", 00:25:16.070 "recv_buf_size": 2097152, 00:25:16.070 "send_buf_size": 2097152, 00:25:16.070 "enable_recv_pipe": true, 00:25:16.070 "enable_quickack": false, 00:25:16.070 "enable_placement_id": 0, 00:25:16.070 "enable_zerocopy_send_server": true, 00:25:16.070 "enable_zerocopy_send_client": false, 00:25:16.070 "zerocopy_threshold": 0, 00:25:16.070 "tls_version": 0, 00:25:16.070 "enable_ktls": false 00:25:16.070 } 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "vmd", 00:25:16.070 "config": [] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "accel", 00:25:16.070 "config": [ 00:25:16.070 { 00:25:16.070 "method": "accel_set_options", 00:25:16.070 "params": { 00:25:16.070 "small_cache_size": 128, 00:25:16.070 "large_cache_size": 16, 00:25:16.070 "task_count": 2048, 00:25:16.070 "sequence_count": 2048, 00:25:16.070 "buf_count": 2048 00:25:16.070 } 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "bdev", 00:25:16.070 "config": [ 00:25:16.070 { 00:25:16.070 "method": "bdev_set_options", 00:25:16.070 "params": { 00:25:16.070 "bdev_io_pool_size": 65535, 00:25:16.070 "bdev_io_cache_size": 256, 00:25:16.070 "bdev_auto_examine": true, 00:25:16.070 "iobuf_small_cache_size": 128, 00:25:16.070 "iobuf_large_cache_size": 16 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_raid_set_options", 00:25:16.070 "params": { 00:25:16.070 "process_window_size_kb": 1024 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_iscsi_set_options", 00:25:16.070 "params": { 00:25:16.070 "timeout_sec": 30 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_nvme_set_options", 00:25:16.070 "params": { 00:25:16.070 "action_on_timeout": "none", 00:25:16.070 "timeout_us": 0, 00:25:16.070 "timeout_admin_us": 0, 00:25:16.070 "keep_alive_timeout_ms": 10000, 00:25:16.070 "arbitration_burst": 0, 00:25:16.070 "low_priority_weight": 0, 00:25:16.070 "medium_priority_weight": 0, 00:25:16.070 "high_priority_weight": 0, 00:25:16.070 "nvme_adminq_poll_period_us": 10000, 00:25:16.070 "nvme_ioq_poll_period_us": 0, 00:25:16.070 "io_queue_requests": 0, 00:25:16.070 "delay_cmd_submit": true, 00:25:16.070 "transport_retry_count": 4, 00:25:16.070 "bdev_retry_count": 3, 00:25:16.070 "transport_ack_timeout": 0, 00:25:16.070 "ctrlr_loss_timeout_sec": 0, 00:25:16.070 "reconnect_delay_sec": 0, 00:25:16.070 "fast_io_fail_timeout_sec": 0, 00:25:16.070 "disable_auto_failback": false, 00:25:16.070 "generate_uuids": false, 00:25:16.070 "transport_tos": 0, 00:25:16.070 "nvme_error_stat": false, 00:25:16.070 "rdma_srq_size": 0, 00:25:16.070 "io_path_stat": false, 00:25:16.070 "allow_accel_sequence": false, 00:25:16.070 
"rdma_max_cq_size": 0, 00:25:16.070 "rdma_cm_event_timeout_ms": 0, 00:25:16.070 "dhchap_digests": [ 00:25:16.070 "sha256", 00:25:16.070 "sha384", 00:25:16.070 "sha512" 00:25:16.070 ], 00:25:16.070 "dhchap_dhgroups": [ 00:25:16.070 "null", 00:25:16.070 "ffdhe2048", 00:25:16.070 "ffdhe3072", 00:25:16.070 "ffdhe4096", 00:25:16.070 "ffdhe6144", 00:25:16.070 "ffdhe8192" 00:25:16.070 ] 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_nvme_set_hotplug", 00:25:16.070 "params": { 00:25:16.070 "period_us": 100000, 00:25:16.070 "enable": false 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_malloc_create", 00:25:16.070 "params": { 00:25:16.070 "name": "malloc0", 00:25:16.070 "num_blocks": 8192, 00:25:16.070 "block_size": 4096, 00:25:16.070 "physical_block_size": 4096, 00:25:16.070 "uuid": "446909af-f64b-4c2f-8746-fe8167cccfd9", 00:25:16.070 "optimal_io_boundary": 0 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "bdev_wait_for_examine" 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "nbd", 00:25:16.070 "config": [] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "scheduler", 00:25:16.070 "config": [ 00:25:16.070 { 00:25:16.070 "method": "framework_set_scheduler", 00:25:16.070 "params": { 00:25:16.070 "name": "static" 00:25:16.070 } 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "subsystem": "nvmf", 00:25:16.070 "config": [ 00:25:16.070 { 00:25:16.070 "method": "nvmf_set_config", 00:25:16.070 "params": { 00:25:16.070 "discovery_filter": "match_any", 00:25:16.070 "admin_cmd_passthru": { 00:25:16.070 "identify_ctrlr": false 00:25:16.070 } 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_set_max_subsystems", 00:25:16.070 "params": { 00:25:16.070 "max_subsystems": 1024 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_set_crdt", 00:25:16.070 "params": { 00:25:16.070 "crdt1": 0, 00:25:16.070 "crdt2": 0, 00:25:16.070 "crdt3": 0 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_create_transport", 00:25:16.070 "params": { 00:25:16.070 "trtype": "TCP", 00:25:16.070 "max_queue_depth": 128, 00:25:16.070 "max_io_qpairs_per_ctrlr": 127, 00:25:16.070 "in_capsule_data_size": 4096, 00:25:16.070 "max_io_size": 131072, 00:25:16.070 "io_unit_size": 131072, 00:25:16.070 "max_aq_depth": 128, 00:25:16.070 "num_shared_buffers": 511, 00:25:16.070 "buf_cache_size": 4294967295, 00:25:16.070 "dif_insert_or_strip": false, 00:25:16.070 "zcopy": false, 00:25:16.070 "c2h_success": false, 00:25:16.070 "sock_priority": 0, 00:25:16.070 "abort_timeout_sec": 1, 00:25:16.070 "ack_timeout": 0, 00:25:16.070 "data_wr_pool_size": 0 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_create_subsystem", 00:25:16.070 "params": { 00:25:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.070 "allow_any_host": false, 00:25:16.070 "serial_number": "00000000000000000000", 00:25:16.070 "model_number": "SPDK bdev Controller", 00:25:16.070 "max_namespaces": 32, 00:25:16.070 "min_cntlid": 1, 00:25:16.070 "max_cntlid": 65519, 00:25:16.070 "ana_reporting": false 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_subsystem_add_host", 00:25:16.070 "params": { 00:25:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.070 "host": "nqn.2016-06.io.spdk:host1", 00:25:16.070 "psk": "key0" 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_subsystem_add_ns", 00:25:16.070 
"params": { 00:25:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.070 "namespace": { 00:25:16.070 "nsid": 1, 00:25:16.070 "bdev_name": "malloc0", 00:25:16.070 "nguid": "446909AFF64B4C2F8746FE8167CCCFD9", 00:25:16.070 "uuid": "446909af-f64b-4c2f-8746-fe8167cccfd9", 00:25:16.070 "no_auto_visible": false 00:25:16.070 } 00:25:16.070 } 00:25:16.070 }, 00:25:16.070 { 00:25:16.070 "method": "nvmf_subsystem_add_listener", 00:25:16.070 "params": { 00:25:16.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.070 "listen_address": { 00:25:16.070 "trtype": "TCP", 00:25:16.070 "adrfam": "IPv4", 00:25:16.070 "traddr": "10.0.0.2", 00:25:16.070 "trsvcid": "4420" 00:25:16.070 }, 00:25:16.070 "secure_channel": true 00:25:16.070 } 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 } 00:25:16.070 ] 00:25:16.070 }' 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1940798 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1940798 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940798 ']' 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.071 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.071 [2024-07-14 14:57:55.157654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:16.071 [2024-07-14 14:57:55.157794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.071 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.071 [2024-07-14 14:57:55.295412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.332 [2024-07-14 14:57:55.552259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.332 [2024-07-14 14:57:55.552334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.332 [2024-07-14 14:57:55.552364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.332 [2024-07-14 14:57:55.552390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.332 [2024-07-14 14:57:55.552413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:16.332 [2024-07-14 14:57:55.552552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.900 [2024-07-14 14:57:56.102826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.900 [2024-07-14 14:57:56.134819] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:16.900 [2024-07-14 14:57:56.135141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1940948 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1940948 /var/tmp/bdevperf.sock 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940948 ']' 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:16.900 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:16.900 "subsystems": [ 00:25:16.900 { 00:25:16.900 "subsystem": "keyring", 00:25:16.900 "config": [ 00:25:16.900 { 00:25:16.900 "method": "keyring_file_add_key", 00:25:16.900 "params": { 00:25:16.900 "name": "key0", 00:25:16.900 "path": "/tmp/tmp.nU1Vut9UEl" 00:25:16.900 } 00:25:16.900 } 00:25:16.900 ] 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "subsystem": "iobuf", 00:25:16.900 "config": [ 00:25:16.900 { 00:25:16.900 "method": "iobuf_set_options", 00:25:16.900 "params": { 00:25:16.900 "small_pool_count": 8192, 00:25:16.900 "large_pool_count": 1024, 00:25:16.900 "small_bufsize": 8192, 00:25:16.900 "large_bufsize": 135168 00:25:16.900 } 00:25:16.900 } 00:25:16.900 ] 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "subsystem": "sock", 00:25:16.900 "config": [ 00:25:16.900 { 00:25:16.900 "method": "sock_set_default_impl", 00:25:16.900 "params": { 00:25:16.900 "impl_name": "posix" 00:25:16.900 } 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "method": "sock_impl_set_options", 00:25:16.900 "params": { 00:25:16.900 "impl_name": "ssl", 00:25:16.900 "recv_buf_size": 4096, 00:25:16.900 "send_buf_size": 4096, 00:25:16.900 "enable_recv_pipe": true, 00:25:16.900 "enable_quickack": false, 00:25:16.900 "enable_placement_id": 0, 00:25:16.900 "enable_zerocopy_send_server": true, 00:25:16.900 "enable_zerocopy_send_client": false, 00:25:16.900 "zerocopy_threshold": 0, 00:25:16.900 "tls_version": 0, 00:25:16.900 "enable_ktls": false 00:25:16.900 } 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "method": "sock_impl_set_options", 00:25:16.900 "params": { 00:25:16.900 "impl_name": "posix", 00:25:16.900 "recv_buf_size": 2097152, 00:25:16.900 "send_buf_size": 2097152, 00:25:16.900 "enable_recv_pipe": true, 00:25:16.900 "enable_quickack": false, 00:25:16.900 "enable_placement_id": 0, 00:25:16.900 "enable_zerocopy_send_server": true, 00:25:16.900 "enable_zerocopy_send_client": false, 00:25:16.900 "zerocopy_threshold": 0, 00:25:16.900 "tls_version": 0, 00:25:16.900 "enable_ktls": false 00:25:16.900 } 00:25:16.900 } 00:25:16.900 ] 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "subsystem": "vmd", 00:25:16.900 "config": [] 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "subsystem": "accel", 00:25:16.900 "config": [ 00:25:16.900 { 00:25:16.900 "method": "accel_set_options", 00:25:16.900 "params": { 00:25:16.900 "small_cache_size": 128, 00:25:16.900 "large_cache_size": 16, 00:25:16.900 "task_count": 2048, 00:25:16.900 "sequence_count": 2048, 00:25:16.900 "buf_count": 2048 00:25:16.900 } 00:25:16.900 } 00:25:16.900 ] 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "subsystem": "bdev", 00:25:16.900 "config": [ 00:25:16.900 { 00:25:16.900 "method": "bdev_set_options", 00:25:16.900 "params": { 00:25:16.900 "bdev_io_pool_size": 65535, 00:25:16.900 "bdev_io_cache_size": 256, 00:25:16.900 "bdev_auto_examine": true, 00:25:16.900 "iobuf_small_cache_size": 128, 00:25:16.900 "iobuf_large_cache_size": 16 00:25:16.900 } 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "method": "bdev_raid_set_options", 00:25:16.900 "params": { 00:25:16.900 "process_window_size_kb": 1024 00:25:16.900 } 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "method": "bdev_iscsi_set_options", 00:25:16.900 "params": { 00:25:16.900 "timeout_sec": 30 00:25:16.900 } 00:25:16.900 }, 00:25:16.900 { 00:25:16.900 "method": "bdev_nvme_set_options", 00:25:16.900 "params": { 00:25:16.900 "action_on_timeout": "none", 00:25:16.900 "timeout_us": 0, 00:25:16.900 "timeout_admin_us": 0, 00:25:16.900 "keep_alive_timeout_ms": 
10000, 00:25:16.900 "arbitration_burst": 0, 00:25:16.900 "low_priority_weight": 0, 00:25:16.900 "medium_priority_weight": 0, 00:25:16.900 "high_priority_weight": 0, 00:25:16.900 "nvme_adminq_poll_period_us": 10000, 00:25:16.900 "nvme_ioq_poll_period_us": 0, 00:25:16.900 "io_queue_requests": 512, 00:25:16.900 "delay_cmd_submit": true, 00:25:16.900 "transport_retry_count": 4, 00:25:16.900 "bdev_retry_count": 3, 00:25:16.900 "transport_ack_timeout": 0, 00:25:16.900 "ctrlr_loss_timeout_sec": 0, 00:25:16.900 "reconnect_delay_sec": 0, 00:25:16.900 "fast_io_fail_timeout_sec": 0, 00:25:16.900 "disable_auto_failback": false, 00:25:16.900 "generate_uuids": false, 00:25:16.900 "transport_tos": 0, 00:25:16.900 "nvme_error_stat": false, 00:25:16.900 "rdma_srq_size": 0, 00:25:16.900 "io_path_stat": false, 00:25:16.900 "allow_accel_sequence": false, 00:25:16.900 "rdma_max_cq_size": 0, 00:25:16.900 "rdma_cm_event_timeout_ms": 0, 00:25:16.900 "dhchap_digests": [ 00:25:16.900 "sha256", 00:25:16.900 "sha384", 00:25:16.900 "sha512" 00:25:16.900 ], 00:25:16.900 "dhchap_dhgroups": [ 00:25:16.900 "null", 00:25:16.901 "ffdhe2048", 00:25:16.901 "ffdhe3072", 00:25:16.901 "ffdhe4096", 00:25:16.901 "ffdhe6144", 00:25:16.901 "ffdhe8192" 00:25:16.901 ] 00:25:16.901 } 00:25:16.901 }, 00:25:16.901 { 00:25:16.901 "method": "bdev_nvme_attach_controller", 00:25:16.901 "params": { 00:25:16.901 "name": "nvme0", 00:25:16.901 "trtype": "TCP", 00:25:16.901 "adrfam": "IPv4", 00:25:16.901 "traddr": "10.0.0.2", 00:25:16.901 "trsvcid": "4420", 00:25:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.901 "prchk_reftag": false, 00:25:16.901 "prchk_guard": false, 00:25:16.901 "ctrlr_loss_timeout_sec": 0, 00:25:16.901 "reconnect_delay_sec": 0, 00:25:16.901 "fast_io_fail_timeout_sec": 0, 00:25:16.901 "psk": "key0", 00:25:16.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:16.901 "hdgst": false, 00:25:16.901 "ddgst": false 00:25:16.901 } 00:25:16.901 }, 00:25:16.901 { 00:25:16.901 "method": "bdev_nvme_set_hotplug", 00:25:16.901 "params": { 00:25:16.901 "period_us": 100000, 00:25:16.901 "enable": false 00:25:16.901 } 00:25:16.901 }, 00:25:16.901 { 00:25:16.901 "method": "bdev_enable_histogram", 00:25:16.901 "params": { 00:25:16.901 "name": "nvme0n1", 00:25:16.901 "enable": true 00:25:16.901 } 00:25:16.901 }, 00:25:16.901 { 00:25:16.901 "method": "bdev_wait_for_examine" 00:25:16.901 } 00:25:16.901 ] 00:25:16.901 }, 00:25:16.901 { 00:25:16.901 "subsystem": "nbd", 00:25:16.901 "config": [] 00:25:16.901 } 00:25:16.901 ] 00:25:16.901 }' 00:25:16.901 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.901 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.160 [2024-07-14 14:57:56.274827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:17.160 [2024-07-14 14:57:56.274994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940948 ] 00:25:17.160 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.160 [2024-07-14 14:57:56.413092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.418 [2024-07-14 14:57:56.673497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.984 [2024-07-14 14:57:57.112904] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.984 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.984 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:17.984 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.984 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:18.242 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.242 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:18.501 Running I/O for 1 seconds... 00:25:19.437 00:25:19.437 Latency(us) 00:25:19.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.437 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:19.437 Verification LBA range: start 0x0 length 0x2000 00:25:19.437 nvme0n1 : 1.03 2363.53 9.23 0.00 0.00 53523.27 11116.85 51652.08 00:25:19.437 =================================================================================================================== 00:25:19.437 Total : 2363.53 9.23 0.00 0.00 53523.27 11116.85 51652.08 00:25:19.437 0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:19.437 nvmf_trace.0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1940948 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940948 ']' 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1940948 00:25:19.437 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940948 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940948' 00:25:19.723 killing process with pid 1940948 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940948 00:25:19.723 Received shutdown signal, test time was about 1.000000 seconds 00:25:19.723 00:25:19.723 Latency(us) 00:25:19.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.723 =================================================================================================================== 00:25:19.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.723 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940948 00:25:20.662 14:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.663 rmmod nvme_tcp 00:25:20.663 rmmod nvme_fabrics 00:25:20.663 rmmod nvme_keyring 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1940798 ']' 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1940798 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940798 ']' 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940798 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.663 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940798 00:25:20.923 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:20.923 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:20.923 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940798' 00:25:20.923 killing process with pid 1940798 00:25:20.923 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940798 00:25:20.923 14:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940798 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.301 14:58:01 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.301 14:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.208 14:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.208 14:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.5z6Fzabdf4 /tmp/tmp.InPAqvZpBV /tmp/tmp.nU1Vut9UEl 00:25:24.208 00:25:24.208 real 1m50.780s 00:25:24.208 user 3m2.031s 00:25:24.208 sys 0m25.164s 00:25:24.208 14:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.209 14:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.209 ************************************ 00:25:24.209 END TEST nvmf_tls 00:25:24.209 ************************************ 00:25:24.209 14:58:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:24.209 14:58:03 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:24.209 14:58:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.209 14:58:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.209 14:58:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.468 ************************************ 00:25:24.468 START TEST nvmf_fips 00:25:24.468 ************************************ 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:24.468 * Looking for test storage... 
00:25:24.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.468 14:58:03 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:24.468 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:24.469 Error setting digest 00:25:24.469 0002A2D9817F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:24.469 0002A2D9817F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.469 14:58:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.375 
14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.375 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.376 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:25:26.635 00:25:26.635 --- 10.0.0.2 ping statistics --- 00:25:26.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.635 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:26.635 00:25:26.635 --- 10.0.0.1 ping statistics --- 00:25:26.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.635 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1943573 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1943573 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1943573 ']' 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.635 14:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:26.895 [2024-07-14 14:58:05.961296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:26.895 [2024-07-14 14:58:05.961439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.895 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.895 [2024-07-14 14:58:06.101084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.154 [2024-07-14 14:58:06.369030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.154 [2024-07-14 14:58:06.369116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:27.154 [2024-07-14 14:58:06.369146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.155 [2024-07-14 14:58:06.369171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.155 [2024-07-14 14:58:06.369194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.155 [2024-07-14 14:58:06.369249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.720 14:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.720 14:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.721 14:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:27.977 [2024-07-14 14:58:07.083462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.977 [2024-07-14 14:58:07.099456] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:27.977 [2024-07-14 14:58:07.099772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.977 [2024-07-14 14:58:07.169691] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:27.977 malloc0 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1943726 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1943726 /var/tmp/bdevperf.sock 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1943726 ']' 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:25:27.977 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.978 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.978 14:58:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.236 [2024-07-14 14:58:07.314107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:28.236 [2024-07-14 14:58:07.314256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943726 ] 00:25:28.236 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.236 [2024-07-14 14:58:07.444003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.495 [2024-07-14 14:58:07.713931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.060 14:58:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.060 14:58:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:29.060 14:58:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.318 [2024-07-14 14:58:08.502656] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.318 [2024-07-14 14:58:08.502845] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:29.318 TLSTESTn1 00:25:29.318 14:58:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:29.577 Running I/O for 10 seconds... 
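Condensed from the trace above, the TLS portion of this test amounts to writing the pre-shared key to a 0600-mode file and handing that file to the initiator-side attach; the rpc.py call below is the one shown in the log, with paths shortened to repository-relative form. The matching target-side nvmf_subsystem_add_host call is made inside setup_nvmf_tgt_conf and its arguments are not visible in this excerpt, so this is a sketch of the initiator half only.

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  # attach a controller over NVMe/TCP with the PSK, through the bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  # kick off the 10-second verify workload whose results follow below
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests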
00:25:39.587 00:25:39.587 Latency(us) 00:25:39.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:39.587 Verification LBA range: start 0x0 length 0x2000 00:25:39.587 TLSTESTn1 : 10.02 2605.76 10.18 0.00 0.00 49033.23 8592.50 45244.11 00:25:39.587 =================================================================================================================== 00:25:39.587 Total : 2605.76 10.18 0.00 0.00 49033.23 8592.50 45244.11 00:25:39.587 0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:39.587 nvmf_trace.0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1943726 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1943726 ']' 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1943726 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943726 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943726' 00:25:39.587 killing process with pid 1943726 00:25:39.587 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1943726 00:25:39.588 Received shutdown signal, test time was about 10.000000 seconds 00:25:39.588 00:25:39.588 Latency(us) 00:25:39.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.588 =================================================================================================================== 00:25:39.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.588 [2024-07-14 14:58:18.895561] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:39.588 14:58:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1943726 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.968 rmmod nvme_tcp 00:25:40.968 rmmod nvme_fabrics 00:25:40.968 rmmod nvme_keyring 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1943573 ']' 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1943573 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1943573 ']' 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1943573 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943573 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943573' 00:25:40.968 killing process with pid 1943573 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1943573 00:25:40.968 [2024-07-14 14:58:19.999405] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:40.968 14:58:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1943573 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.342 14:58:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.248 14:58:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.248 14:58:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:44.248 00:25:44.248 real 0m19.975s 00:25:44.248 user 0m27.167s 00:25:44.248 sys 0m5.185s 00:25:44.248 14:58:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.248 14:58:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.248 ************************************ 00:25:44.248 END TEST nvmf_fips 
00:25:44.248 ************************************ 00:25:44.248 14:58:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:44.248 14:58:23 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:44.248 14:58:23 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:44.248 14:58:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:44.248 14:58:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.248 14:58:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.248 ************************************ 00:25:44.248 START TEST nvmf_fuzz 00:25:44.248 ************************************ 00:25:44.248 14:58:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:44.506 * Looking for test storage... 00:25:44.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.506 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.507 14:58:23 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.507 14:58:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.411 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.411 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:25:46.412 00:25:46.412 --- 10.0.0.2 ping statistics --- 00:25:46.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.412 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:25:46.412 00:25:46.412 --- 10.0.0.1 ping statistics --- 00:25:46.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.412 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1947239 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1947239 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1947239 ']' 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
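The namespace plumbing that the trace repeats for each test suite builds a small two-port loopback topology: one port of the E810 NIC (cvl_0_0, 10.0.0.2) is moved into a private network namespace and serves as the NVMe/TCP target, while its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace and acts as the initiator. Consolidated from the commands in the log:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP on the initiator-side port
  ping -c 1 10.0.0.2                                               # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check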
00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.412 14:58:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 Malloc0 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:47.791 14:58:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:19.881 Fuzzing completed. 
Shutting down the fuzz application 00:26:19.881 00:26:19.881 Dumping successful admin opcodes: 00:26:19.881 8, 9, 10, 24, 00:26:19.881 Dumping successful io opcodes: 00:26:19.881 0, 9, 00:26:19.881 NS: 0x200003aefec0 I/O qp, Total commands completed: 334265, total successful commands: 1982, random_seed: 541435520 00:26:19.881 NS: 0x200003aefec0 admin qp, Total commands completed: 42112, total successful commands: 344, random_seed: 857939008 00:26:19.881 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:20.817 Fuzzing completed. Shutting down the fuzz application 00:26:20.817 00:26:20.817 Dumping successful admin opcodes: 00:26:20.817 24, 00:26:20.817 Dumping successful io opcodes: 00:26:20.817 00:26:20.817 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1167606705 00:26:20.817 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1167808719 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:20.817 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:20.817 rmmod nvme_tcp 00:26:20.817 rmmod nvme_fabrics 00:26:20.817 rmmod nvme_keyring 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1947239 ']' 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1947239 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1947239 ']' 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1947239 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1947239 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.817 
14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1947239' 00:26:20.817 killing process with pid 1947239 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1947239 00:26:20.817 14:59:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1947239 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.725 14:59:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.647 14:59:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.647 14:59:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:24.647 00:26:24.647 real 0m40.108s 00:26:24.647 user 0m58.044s 00:26:24.647 sys 0m12.835s 00:26:24.647 14:59:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:24.647 14:59:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.647 ************************************ 00:26:24.647 END TEST nvmf_fuzz 00:26:24.647 ************************************ 00:26:24.647 14:59:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:24.647 14:59:03 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:24.647 14:59:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:24.647 14:59:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.647 14:59:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:24.647 ************************************ 00:26:24.647 START TEST nvmf_multiconnection 00:26:24.647 ************************************ 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:24.647 * Looking for test storage... 
00:26:24.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.647 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.630 14:59:05 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.630 14:59:05 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.630 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:26:26.631 00:26:26.631 --- 10.0.0.2 ping statistics --- 00:26:26.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.631 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:26:26.631 00:26:26.631 --- 10.0.0.1 ping statistics --- 00:26:26.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.631 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1953250 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1953250 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1953250 ']' 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
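For reference, the network bring-up that nvmf_tcp_init performs in the trace above can be reproduced by hand. The following is a minimal sketch, not the test framework itself: it assumes the two E810 ports detected above (cvl_0_0 for the target side, cvl_0_1 for the initiator side) and replays the same iproute2/iptables/ping steps.

    #!/usr/bin/env bash
    # Sketch of the loopback NVMe/TCP topology used by the test:
    # the target-side port is moved into its own network namespace,
    # the initiator-side port stays in the default namespace.
    set -e

    TARGET_IF=cvl_0_0          # target-side port (assumed)
    INITIATOR_IF=cvl_0_1       # initiator-side port (assumed)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the default port on the initiator side.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Only after both pings succeed does the framework launch nvmf_tgt inside the namespace, as the trace shows next.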
00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.631 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.890 [2024-07-14 14:59:05.957545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:26.890 [2024-07-14 14:59:05.957702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.890 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.890 [2024-07-14 14:59:06.119578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.150 [2024-07-14 14:59:06.396132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.150 [2024-07-14 14:59:06.396205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.150 [2024-07-14 14:59:06.396232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.150 [2024-07-14 14:59:06.396254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.150 [2024-07-14 14:59:06.396286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.150 [2024-07-14 14:59:06.396410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.150 [2024-07-14 14:59:06.396469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.150 [2024-07-14 14:59:06.397908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.150 [2024-07-14 14:59:06.397918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 [2024-07-14 14:59:06.875898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 
14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 Malloc1 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 [2024-07-14 14:59:06.984402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.719 14:59:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 Malloc2 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 Malloc3 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.978 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.979 Malloc4 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:27.979 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.237 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.237 Malloc5 00:26:28.237 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 Malloc6 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.238 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.496 Malloc7 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.496 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 Malloc8 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 Malloc9 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.497 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.755 Malloc10 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:28.755 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.756 Malloc11 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
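The eleven subsystems configured above all follow the same pattern. As a hedged sketch of that configuration phase, using the same rpc commands that appear in the trace (the RPC wrapper path below is an assumption about how rpc.py would be pointed at the namespaced target; the test itself uses its rpc_cmd helper):

    #!/usr/bin/env bash
    # Sketch of the target configuration performed above via rpc_cmd.
    # The namespaced rpc.py invocation is an assumption for illustration.
    RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"

    # TCP transport, with the same extra options used in the trace (-o -u 8192).
    $RPC nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 11); do
        # 64 MB malloc bdev with 512-byte blocks backing each subsystem.
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done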
00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.756 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:29.324 14:59:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:29.324 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:29.324 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.324 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:29.324 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.854 14:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:32.111 14:59:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:32.111 14:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:32.111 14:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.111 14:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:32.111 14:59:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.019 
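On the initiator side, each subsystem is then attached with nvme-cli and the test polls until a block device with the matching serial shows up. A rough sketch of that loop, using the same nvme connect flags as in the trace (the polling below is a simplification of the waitforserial helper, which also bounds the number of retries):

    #!/usr/bin/env bash
    # Sketch of the initiator-side connect loop shown above.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    for i in $(seq 1 11); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420

        # Wait until a block device with serial SPDK$i is visible,
        # roughly what waitforserial does with lsblk + grep in the trace.
        until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
            sleep 2
        done
    done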
14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.019 14:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:34.950 14:59:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:34.950 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:34.950 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.950 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:34.950 14:59:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:36.844 14:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:36.844 14:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:36.844 14:59:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:36.844 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:36.844 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.844 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:36.844 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.844 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:37.775 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:37.775 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:37.775 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.775 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:37.775 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.670 14:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:40.601 14:59:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:40.601 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:40.601 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.601 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:40.601 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.496 14:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:43.062 14:59:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:43.062 14:59:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:43.062 14:59:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.062 14:59:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:43.062 14:59:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.637 14:59:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:46.203 14:59:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:46.203 14:59:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:46.203 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.203 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:46.203 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.100 14:59:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:49.033 14:59:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:49.033 14:59:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:49.033 14:59:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.033 14:59:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:49.033 14:59:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.930 14:59:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:51.862 14:59:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:51.862 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:51.862 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.862 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
00:26:51.862 14:59:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.757 14:59:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:54.322 14:59:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:54.322 14:59:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:54.322 14:59:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:54.322 14:59:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:54.322 14:59:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.845 14:59:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:57.410 14:59:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:57.410 14:59:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:57.410 14:59:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.410 14:59:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:57.410 14:59:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:59.307 14:59:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:59.565 [global] 00:26:59.565 thread=1 00:26:59.565 invalidate=1 00:26:59.565 rw=read 00:26:59.565 time_based=1 00:26:59.565 runtime=10 00:26:59.565 ioengine=libaio 00:26:59.565 direct=1 00:26:59.565 bs=262144 00:26:59.565 iodepth=64 00:26:59.565 norandommap=1 00:26:59.565 numjobs=1 00:26:59.565 00:26:59.565 [job0] 00:26:59.565 filename=/dev/nvme0n1 00:26:59.565 [job1] 00:26:59.565 filename=/dev/nvme10n1 00:26:59.565 [job2] 00:26:59.565 filename=/dev/nvme1n1 00:26:59.565 [job3] 00:26:59.565 filename=/dev/nvme2n1 00:26:59.565 [job4] 00:26:59.565 filename=/dev/nvme3n1 00:26:59.565 [job5] 00:26:59.565 filename=/dev/nvme4n1 00:26:59.565 [job6] 00:26:59.565 filename=/dev/nvme5n1 00:26:59.565 [job7] 00:26:59.565 filename=/dev/nvme6n1 00:26:59.565 [job8] 00:26:59.565 filename=/dev/nvme7n1 00:26:59.565 [job9] 00:26:59.565 filename=/dev/nvme8n1 00:26:59.565 [job10] 00:26:59.565 filename=/dev/nvme9n1 00:26:59.565 Could not set queue depth (nvme0n1) 00:26:59.565 Could not set queue depth (nvme10n1) 00:26:59.565 Could not set queue depth (nvme1n1) 00:26:59.565 Could not set queue depth (nvme2n1) 00:26:59.565 Could not set queue depth (nvme3n1) 00:26:59.565 Could not set queue depth (nvme4n1) 00:26:59.565 Could not set queue depth (nvme5n1) 00:26:59.565 Could not set queue depth (nvme6n1) 00:26:59.565 Could not set queue depth (nvme7n1) 00:26:59.565 Could not set queue depth (nvme8n1) 00:26:59.565 Could not set queue depth (nvme9n1) 00:26:59.822 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:59.822 fio-3.35 00:26:59.822 Starting 11 threads 00:27:12.017 00:27:12.017 job0: 
(groupid=0, jobs=1): err= 0: pid=1957618: Sun Jul 14 14:59:49 2024 00:27:12.017 read: IOPS=774, BW=194MiB/s (203MB/s)(1945MiB/10049msec) 00:27:12.017 slat (usec): min=9, max=130686, avg=1056.07, stdev=4164.18 00:27:12.017 clat (usec): min=1790, max=212499, avg=81562.59, stdev=37020.48 00:27:12.017 lat (usec): min=1855, max=308915, avg=82618.66, stdev=37380.40 00:27:12.017 clat percentiles (msec): 00:27:12.017 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 51], 00:27:12.017 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 83], 00:27:12.017 | 70.00th=[ 96], 80.00th=[ 112], 90.00th=[ 133], 95.00th=[ 155], 00:27:12.017 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 211], 99.95th=[ 211], 00:27:12.017 | 99.99th=[ 213] 00:27:12.017 bw ( KiB/s): min=120320, max=281600, per=12.22%, avg=197529.60, stdev=60534.54, samples=20 00:27:12.017 iops : min= 470, max= 1100, avg=771.60, stdev=236.46, samples=20 00:27:12.017 lat (msec) : 2=0.01%, 4=0.04%, 10=0.51%, 20=0.12%, 50=18.27% 00:27:12.017 lat (msec) : 100=53.36%, 250=27.69% 00:27:12.017 cpu : usr=0.49%, sys=2.49%, ctx=1037, majf=0, minf=3721 00:27:12.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:12.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=7779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job1: (groupid=0, jobs=1): err= 0: pid=1957619: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=474, BW=119MiB/s (124MB/s)(1201MiB/10127msec) 00:27:12.018 slat (usec): min=8, max=158927, avg=1237.58, stdev=5930.55 00:27:12.018 clat (usec): min=1296, max=284615, avg=133568.34, stdev=55434.54 00:27:12.018 lat (usec): min=1363, max=350026, avg=134805.92, stdev=56255.65 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 4], 5.00th=[ 28], 10.00th=[ 70], 20.00th=[ 91], 00:27:12.018 | 30.00th=[ 106], 40.00th=[ 122], 50.00th=[ 131], 60.00th=[ 144], 00:27:12.018 | 70.00th=[ 161], 80.00th=[ 186], 90.00th=[ 209], 95.00th=[ 224], 00:27:12.018 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 284], 00:27:12.018 | 99.99th=[ 284] 00:27:12.018 bw ( KiB/s): min=67072, max=193024, per=7.51%, avg=121344.00, stdev=34807.28, samples=20 00:27:12.018 iops : min= 262, max= 754, avg=474.00, stdev=135.97, samples=20 00:27:12.018 lat (msec) : 2=0.17%, 4=1.27%, 10=0.83%, 20=1.81%, 50=3.29% 00:27:12.018 lat (msec) : 100=19.32%, 250=72.42%, 500=0.90% 00:27:12.018 cpu : usr=0.26%, sys=1.52%, ctx=959, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=4804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job2: (groupid=0, jobs=1): err= 0: pid=1957620: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=460, BW=115MiB/s (121MB/s)(1166MiB/10122msec) 00:27:12.018 slat (usec): min=9, max=125837, avg=1468.81, stdev=6067.70 00:27:12.018 clat (msec): min=6, max=318, avg=137.39, stdev=47.55 00:27:12.018 lat (msec): min=6, max=350, avg=138.86, stdev=48.16 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 15], 5.00th=[ 80], 10.00th=[ 92], 20.00th=[ 102], 00:27:12.018 | 30.00th=[ 
110], 40.00th=[ 123], 50.00th=[ 133], 60.00th=[ 144], 00:27:12.018 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 207], 95.00th=[ 220], 00:27:12.018 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:27:12.018 | 99.99th=[ 317] 00:27:12.018 bw ( KiB/s): min=68096, max=156672, per=7.28%, avg=117708.80, stdev=25893.61, samples=20 00:27:12.018 iops : min= 266, max= 612, avg=459.80, stdev=101.15, samples=20 00:27:12.018 lat (msec) : 10=0.49%, 20=1.82%, 50=1.05%, 100=15.29%, 250=79.39% 00:27:12.018 lat (msec) : 500=1.95% 00:27:12.018 cpu : usr=0.22%, sys=1.40%, ctx=879, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=4662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job3: (groupid=0, jobs=1): err= 0: pid=1957621: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=452, BW=113MiB/s (119MB/s)(1141MiB/10083msec) 00:27:12.018 slat (usec): min=9, max=87430, avg=2028.20, stdev=6602.41 00:27:12.018 clat (msec): min=26, max=300, avg=139.24, stdev=42.68 00:27:12.018 lat (msec): min=26, max=315, avg=141.27, stdev=43.60 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 44], 5.00th=[ 86], 10.00th=[ 94], 20.00th=[ 105], 00:27:12.018 | 30.00th=[ 114], 40.00th=[ 124], 50.00th=[ 132], 60.00th=[ 144], 00:27:12.018 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 203], 95.00th=[ 220], 00:27:12.018 | 99.00th=[ 259], 99.50th=[ 268], 99.90th=[ 284], 99.95th=[ 288], 00:27:12.018 | 99.99th=[ 300] 00:27:12.018 bw ( KiB/s): min=71168, max=186880, per=7.13%, avg=115251.20, stdev=28933.02, samples=20 00:27:12.018 iops : min= 278, max= 730, avg=450.20, stdev=113.02, samples=20 00:27:12.018 lat (msec) : 50=2.28%, 100=12.95%, 250=83.53%, 500=1.25% 00:27:12.018 cpu : usr=0.25%, sys=1.67%, ctx=772, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=4565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job4: (groupid=0, jobs=1): err= 0: pid=1957625: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=596, BW=149MiB/s (156MB/s)(1502MiB/10082msec) 00:27:12.018 slat (usec): min=12, max=153121, avg=1587.54, stdev=5707.87 00:27:12.018 clat (msec): min=21, max=335, avg=105.72, stdev=41.28 00:27:12.018 lat (msec): min=21, max=335, avg=107.30, stdev=42.09 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 70], 00:27:12.018 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 106], 00:27:12.018 | 70.00th=[ 124], 80.00th=[ 146], 90.00th=[ 167], 95.00th=[ 186], 00:27:12.018 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 247], 00:27:12.018 | 99.99th=[ 338] 00:27:12.018 bw ( KiB/s): min=67719, max=242176, per=9.42%, avg=152224.35, stdev=51055.24, samples=20 00:27:12.018 iops : min= 264, max= 946, avg=594.60, stdev=199.48, samples=20 00:27:12.018 lat (msec) : 50=1.58%, 100=53.29%, 250=45.08%, 500=0.05% 00:27:12.018 cpu : usr=0.40%, sys=2.17%, ctx=936, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=6009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job5: (groupid=0, jobs=1): err= 0: pid=1957647: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=524, BW=131MiB/s (138MB/s)(1329MiB/10127msec) 00:27:12.018 slat (usec): min=8, max=101791, avg=1397.40, stdev=5521.84 00:27:12.018 clat (usec): min=1050, max=300306, avg=120471.01, stdev=57030.85 00:27:12.018 lat (usec): min=1072, max=300340, avg=121868.41, stdev=57972.81 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 42], 20.00th=[ 73], 00:27:12.018 | 30.00th=[ 91], 40.00th=[ 111], 50.00th=[ 124], 60.00th=[ 133], 00:27:12.018 | 70.00th=[ 146], 80.00th=[ 167], 90.00th=[ 201], 95.00th=[ 213], 00:27:12.018 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 292], 00:27:12.018 | 99.99th=[ 300] 00:27:12.018 bw ( KiB/s): min=67072, max=254976, per=8.32%, avg=134425.60, stdev=53516.14, samples=20 00:27:12.018 iops : min= 262, max= 996, avg=525.10, stdev=209.05, samples=20 00:27:12.018 lat (msec) : 2=0.02%, 4=0.55%, 10=2.22%, 20=2.84%, 50=6.44% 00:27:12.018 lat (msec) : 100=22.73%, 250=63.81%, 500=1.39% 00:27:12.018 cpu : usr=0.38%, sys=1.78%, ctx=959, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=5314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job6: (groupid=0, jobs=1): err= 0: pid=1957660: Sun Jul 14 14:59:49 2024 00:27:12.018 read: IOPS=604, BW=151MiB/s (158MB/s)(1532MiB/10132msec) 00:27:12.018 slat (usec): min=8, max=121344, avg=1028.52, stdev=4938.96 00:27:12.018 clat (usec): min=1141, max=292143, avg=104737.69, stdev=52433.82 00:27:12.018 lat (usec): min=1179, max=349015, avg=105766.21, stdev=52945.17 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 63], 00:27:12.018 | 30.00th=[ 74], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 107], 00:27:12.018 | 70.00th=[ 118], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 213], 00:27:12.018 | 99.00th=[ 255], 99.50th=[ 271], 99.90th=[ 292], 99.95th=[ 292], 00:27:12.018 | 99.99th=[ 292] 00:27:12.018 bw ( KiB/s): min=72704, max=288768, per=9.60%, avg=155212.80, stdev=60581.40, samples=20 00:27:12.018 iops : min= 284, max= 1128, avg=606.30, stdev=236.65, samples=20 00:27:12.018 lat (msec) : 2=0.13%, 4=0.39%, 10=0.28%, 20=0.46%, 50=10.17% 00:27:12.018 lat (msec) : 100=41.76%, 250=45.64%, 500=1.18% 00:27:12.018 cpu : usr=0.33%, sys=1.59%, ctx=1039, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=6126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.018 job7: (groupid=0, jobs=1): err= 0: pid=1957672: Sun Jul 14 14:59:49 2024 00:27:12.018 
read: IOPS=664, BW=166MiB/s (174MB/s)(1662MiB/10010msec) 00:27:12.018 slat (usec): min=8, max=145660, avg=1019.57, stdev=4849.01 00:27:12.018 clat (usec): min=1147, max=312400, avg=95308.32, stdev=52920.45 00:27:12.018 lat (usec): min=1208, max=345942, avg=96327.89, stdev=53504.51 00:27:12.018 clat percentiles (msec): 00:27:12.018 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 37], 20.00th=[ 46], 00:27:12.018 | 30.00th=[ 60], 40.00th=[ 71], 50.00th=[ 90], 60.00th=[ 106], 00:27:12.018 | 70.00th=[ 123], 80.00th=[ 136], 90.00th=[ 163], 95.00th=[ 199], 00:27:12.018 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 284], 00:27:12.018 | 99.99th=[ 313] 00:27:12.018 bw ( KiB/s): min=69632, max=348160, per=10.43%, avg=168563.25, stdev=65481.76, samples=20 00:27:12.018 iops : min= 272, max= 1360, avg=658.45, stdev=255.79, samples=20 00:27:12.018 lat (msec) : 2=0.02%, 4=0.09%, 10=1.25%, 20=1.97%, 50=19.87% 00:27:12.018 lat (msec) : 100=33.41%, 250=42.91%, 500=0.48% 00:27:12.018 cpu : usr=0.46%, sys=1.98%, ctx=1083, majf=0, minf=4097 00:27:12.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:12.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.018 issued rwts: total=6647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.019 job8: (groupid=0, jobs=1): err= 0: pid=1957729: Sun Jul 14 14:59:49 2024 00:27:12.019 read: IOPS=651, BW=163MiB/s (171MB/s)(1638MiB/10052msec) 00:27:12.019 slat (usec): min=8, max=138936, avg=1023.90, stdev=5524.35 00:27:12.019 clat (usec): min=1183, max=328836, avg=97078.98, stdev=61446.95 00:27:12.019 lat (usec): min=1199, max=328868, avg=98102.88, stdev=62302.54 00:27:12.019 clat percentiles (msec): 00:27:12.019 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 37], 00:27:12.019 | 30.00th=[ 52], 40.00th=[ 74], 50.00th=[ 94], 60.00th=[ 110], 00:27:12.019 | 70.00th=[ 128], 80.00th=[ 146], 90.00th=[ 190], 95.00th=[ 213], 00:27:12.019 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 279], 99.95th=[ 284], 00:27:12.019 | 99.99th=[ 330] 00:27:12.019 bw ( KiB/s): min=73216, max=363520, per=10.27%, avg=166105.80, stdev=78667.00, samples=20 00:27:12.019 iops : min= 286, max= 1420, avg=648.85, stdev=307.29, samples=20 00:27:12.019 lat (msec) : 2=0.26%, 4=0.58%, 10=4.04%, 20=4.30%, 50=20.16% 00:27:12.019 lat (msec) : 100=22.80%, 250=47.02%, 500=0.82% 00:27:12.019 cpu : usr=0.33%, sys=2.01%, ctx=1161, majf=0, minf=4097 00:27:12.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:12.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.019 issued rwts: total=6552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.019 job9: (groupid=0, jobs=1): err= 0: pid=1957757: Sun Jul 14 14:59:49 2024 00:27:12.019 read: IOPS=524, BW=131MiB/s (137MB/s)(1322MiB/10082msec) 00:27:12.019 slat (usec): min=13, max=82870, avg=1776.85, stdev=5984.28 00:27:12.019 clat (msec): min=25, max=314, avg=120.14, stdev=57.19 00:27:12.019 lat (msec): min=29, max=344, avg=121.92, stdev=58.11 00:27:12.019 clat percentiles (msec): 00:27:12.019 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 62], 00:27:12.019 | 30.00th=[ 84], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 131], 00:27:12.019 | 
70.00th=[ 148], 80.00th=[ 169], 90.00th=[ 203], 95.00th=[ 220], 00:27:12.019 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 313], 00:27:12.019 | 99.99th=[ 313] 00:27:12.019 bw ( KiB/s): min=68608, max=314368, per=8.27%, avg=133770.80, stdev=67504.70, samples=20 00:27:12.019 iops : min= 268, max= 1228, avg=522.50, stdev=263.71, samples=20 00:27:12.019 lat (msec) : 50=13.22%, 100=25.47%, 250=59.80%, 500=1.51% 00:27:12.019 cpu : usr=0.42%, sys=1.80%, ctx=824, majf=0, minf=4097 00:27:12.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:12.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.019 issued rwts: total=5288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.019 job10: (groupid=0, jobs=1): err= 0: pid=1957769: Sun Jul 14 14:59:49 2024 00:27:12.019 read: IOPS=615, BW=154MiB/s (161MB/s)(1560MiB/10129msec) 00:27:12.019 slat (usec): min=10, max=171387, avg=1481.75, stdev=6061.25 00:27:12.019 clat (msec): min=2, max=317, avg=102.36, stdev=47.90 00:27:12.019 lat (msec): min=2, max=317, avg=103.84, stdev=48.68 00:27:12.019 clat percentiles (msec): 00:27:12.019 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 42], 20.00th=[ 68], 00:27:12.019 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 110], 00:27:12.019 | 70.00th=[ 127], 80.00th=[ 142], 90.00th=[ 163], 95.00th=[ 188], 00:27:12.019 | 99.00th=[ 230], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 264], 00:27:12.019 | 99.99th=[ 317] 00:27:12.019 bw ( KiB/s): min=96256, max=240640, per=9.78%, avg=158054.40, stdev=46730.56, samples=20 00:27:12.019 iops : min= 376, max= 940, avg=617.40, stdev=182.54, samples=20 00:27:12.019 lat (msec) : 4=0.03%, 10=2.60%, 20=2.23%, 50=7.66%, 100=40.70% 00:27:12.019 lat (msec) : 250=46.68%, 500=0.10% 00:27:12.019 cpu : usr=0.37%, sys=2.15%, ctx=885, majf=0, minf=4097 00:27:12.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:12.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.019 issued rwts: total=6238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.019 00:27:12.019 Run status group 0 (all jobs): 00:27:12.019 READ: bw=1579MiB/s (1655MB/s), 113MiB/s-194MiB/s (119MB/s-203MB/s), io=15.6GiB (16.8GB), run=10010-10132msec 00:27:12.019 00:27:12.019 Disk stats (read/write): 00:27:12.019 nvme0n1: ios=15274/0, merge=0/0, ticks=1238817/0, in_queue=1238817, util=97.00% 00:27:12.019 nvme10n1: ios=9424/0, merge=0/0, ticks=1237536/0, in_queue=1237536, util=97.21% 00:27:12.019 nvme1n1: ios=9150/0, merge=0/0, ticks=1238955/0, in_queue=1238955, util=97.48% 00:27:12.019 nvme2n1: ios=8887/0, merge=0/0, ticks=1235086/0, in_queue=1235086, util=97.64% 00:27:12.019 nvme3n1: ios=11784/0, merge=0/0, ticks=1237358/0, in_queue=1237358, util=97.70% 00:27:12.019 nvme4n1: ios=10430/0, merge=0/0, ticks=1233776/0, in_queue=1233776, util=98.09% 00:27:12.019 nvme5n1: ios=12115/0, merge=0/0, ticks=1241199/0, in_queue=1241199, util=98.28% 00:27:12.019 nvme6n1: ios=12865/0, merge=0/0, ticks=1242286/0, in_queue=1242286, util=98.38% 00:27:12.019 nvme7n1: ios=12868/0, merge=0/0, ticks=1241162/0, in_queue=1241162, util=98.86% 00:27:12.019 nvme8n1: ios=10335/0, merge=0/0, ticks=1231359/0, in_queue=1231359, 
util=99.08% 00:27:12.019 nvme9n1: ios=12265/0, merge=0/0, ticks=1229958/0, in_queue=1229958, util=99.23% 00:27:12.019 14:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:12.019 [global] 00:27:12.019 thread=1 00:27:12.019 invalidate=1 00:27:12.019 rw=randwrite 00:27:12.019 time_based=1 00:27:12.019 runtime=10 00:27:12.019 ioengine=libaio 00:27:12.019 direct=1 00:27:12.019 bs=262144 00:27:12.019 iodepth=64 00:27:12.019 norandommap=1 00:27:12.019 numjobs=1 00:27:12.019 00:27:12.019 [job0] 00:27:12.019 filename=/dev/nvme0n1 00:27:12.019 [job1] 00:27:12.019 filename=/dev/nvme10n1 00:27:12.019 [job2] 00:27:12.019 filename=/dev/nvme1n1 00:27:12.019 [job3] 00:27:12.019 filename=/dev/nvme2n1 00:27:12.019 [job4] 00:27:12.019 filename=/dev/nvme3n1 00:27:12.019 [job5] 00:27:12.019 filename=/dev/nvme4n1 00:27:12.019 [job6] 00:27:12.019 filename=/dev/nvme5n1 00:27:12.019 [job7] 00:27:12.019 filename=/dev/nvme6n1 00:27:12.019 [job8] 00:27:12.019 filename=/dev/nvme7n1 00:27:12.019 [job9] 00:27:12.019 filename=/dev/nvme8n1 00:27:12.019 [job10] 00:27:12.019 filename=/dev/nvme9n1 00:27:12.019 Could not set queue depth (nvme0n1) 00:27:12.019 Could not set queue depth (nvme10n1) 00:27:12.019 Could not set queue depth (nvme1n1) 00:27:12.019 Could not set queue depth (nvme2n1) 00:27:12.019 Could not set queue depth (nvme3n1) 00:27:12.019 Could not set queue depth (nvme4n1) 00:27:12.019 Could not set queue depth (nvme5n1) 00:27:12.019 Could not set queue depth (nvme6n1) 00:27:12.019 Could not set queue depth (nvme7n1) 00:27:12.019 Could not set queue depth (nvme8n1) 00:27:12.019 Could not set queue depth (nvme9n1) 00:27:12.019 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.019 fio-3.35 00:27:12.019 Starting 11 threads 00:27:22.025 00:27:22.025 job0: (groupid=0, jobs=1): err= 0: pid=1958792: Sun Jul 14 15:00:00 2024 00:27:22.025 write: IOPS=325, BW=81.4MiB/s (85.4MB/s)(823MiB/10108msec); 0 zone resets 00:27:22.025 slat (usec): min=23, max=118016, avg=2585.92, stdev=6398.42 00:27:22.025 clat (msec): min=6, max=474, avg=193.75, stdev=100.26 
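For reference, the [global] section echoed above is the job file that fio-wrapper builds for this randwrite pass. Below is a minimal sketch of an equivalent standalone run, assuming an illustrative job-file name; the fio parameters and the device order (job0=/dev/nvme0n1, job1=/dev/nvme10n1, and so on) are taken from the trace above, and job0's write statistics continue immediately below.

# Sketch only: rebuild the randwrite job file shown in the log and run it with fio (fio-3.35 was used here).
cat > multiconnection-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
# Append one [jobN] stanza per connected namespace, in the order listed above.
i=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
           /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
    printf '[job%d]\nfilename=%s\n' "$i" "$dev" >> multiconnection-randwrite.fio
    i=$((i + 1))
done
fio multiconnection-randwrite.fio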
00:27:22.025 lat (msec): min=8, max=474, avg=196.33, stdev=101.66 00:27:22.025 clat percentiles (msec): 00:27:22.025 | 1.00th=[ 20], 5.00th=[ 42], 10.00th=[ 73], 20.00th=[ 92], 00:27:22.025 | 30.00th=[ 129], 40.00th=[ 159], 50.00th=[ 178], 60.00th=[ 209], 00:27:22.025 | 70.00th=[ 268], 80.00th=[ 305], 90.00th=[ 334], 95.00th=[ 359], 00:27:22.025 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 456], 99.95th=[ 456], 00:27:22.025 | 99.99th=[ 477] 00:27:22.025 bw ( KiB/s): min=47104, max=170496, per=7.29%, avg=82699.90, stdev=33510.89, samples=20 00:27:22.025 iops : min= 184, max= 666, avg=323.00, stdev=130.85, samples=20 00:27:22.025 lat (msec) : 10=0.09%, 20=0.94%, 50=4.65%, 100=15.82%, 250=46.77% 00:27:22.025 lat (msec) : 500=31.73% 00:27:22.025 cpu : usr=1.11%, sys=0.94%, ctx=1330, majf=0, minf=1 00:27:22.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:22.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.025 issued rwts: total=0,3293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.025 job1: (groupid=0, jobs=1): err= 0: pid=1958793: Sun Jul 14 15:00:00 2024 00:27:22.025 write: IOPS=313, BW=78.4MiB/s (82.3MB/s)(799MiB/10188msec); 0 zone resets 00:27:22.025 slat (usec): min=19, max=70188, avg=2775.76, stdev=6036.51 00:27:22.025 clat (msec): min=10, max=580, avg=201.08, stdev=86.08 00:27:22.025 lat (msec): min=10, max=580, avg=203.86, stdev=87.33 00:27:22.025 clat percentiles (msec): 00:27:22.025 | 1.00th=[ 35], 5.00th=[ 89], 10.00th=[ 111], 20.00th=[ 133], 00:27:22.025 | 30.00th=[ 148], 40.00th=[ 171], 50.00th=[ 190], 60.00th=[ 203], 00:27:22.025 | 70.00th=[ 220], 80.00th=[ 259], 90.00th=[ 342], 95.00th=[ 363], 00:27:22.025 | 99.00th=[ 409], 99.50th=[ 502], 99.90th=[ 558], 99.95th=[ 584], 00:27:22.025 | 99.99th=[ 584] 00:27:22.025 bw ( KiB/s): min=40960, max=131334, per=7.07%, avg=80233.00, stdev=28601.81, samples=20 00:27:22.025 iops : min= 160, max= 513, avg=313.40, stdev=111.72, samples=20 00:27:22.025 lat (msec) : 20=0.41%, 50=1.13%, 100=5.41%, 250=71.72%, 500=20.77% 00:27:22.025 lat (msec) : 750=0.56% 00:27:22.025 cpu : usr=0.87%, sys=0.92%, ctx=1201, majf=0, minf=1 00:27:22.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:22.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.025 issued rwts: total=0,3197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.025 job2: (groupid=0, jobs=1): err= 0: pid=1958794: Sun Jul 14 15:00:00 2024 00:27:22.025 write: IOPS=466, BW=117MiB/s (122MB/s)(1184MiB/10154msec); 0 zone resets 00:27:22.025 slat (usec): min=17, max=143040, avg=1474.45, stdev=5000.63 00:27:22.025 clat (usec): min=1411, max=557971, avg=135604.89, stdev=94583.29 00:27:22.025 lat (usec): min=1545, max=558099, avg=137079.35, stdev=95280.42 00:27:22.025 clat percentiles (msec): 00:27:22.025 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 55], 00:27:22.025 | 30.00th=[ 68], 40.00th=[ 101], 50.00th=[ 116], 60.00th=[ 146], 00:27:22.025 | 70.00th=[ 171], 80.00th=[ 197], 90.00th=[ 271], 95.00th=[ 342], 00:27:22.025 | 99.00th=[ 414], 99.50th=[ 422], 99.90th=[ 485], 99.95th=[ 550], 00:27:22.025 | 99.99th=[ 558] 00:27:22.025 bw ( KiB/s): min=50688, max=238080, 
per=10.55%, avg=119630.15, stdev=52022.38, samples=20 00:27:22.025 iops : min= 198, max= 930, avg=467.25, stdev=203.23, samples=20 00:27:22.025 lat (msec) : 2=0.04%, 4=0.23%, 10=1.73%, 20=4.10%, 50=9.35% 00:27:22.025 lat (msec) : 100=24.39%, 250=47.72%, 500=12.37%, 750=0.06% 00:27:22.025 cpu : usr=1.23%, sys=1.33%, ctx=2541, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,4736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job3: (groupid=0, jobs=1): err= 0: pid=1958807: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=402, BW=101MiB/s (106MB/s)(1022MiB/10159msec); 0 zone resets 00:27:22.026 slat (usec): min=19, max=146489, avg=1726.43, stdev=5394.32 00:27:22.026 clat (usec): min=1655, max=449377, avg=157135.70, stdev=96349.81 00:27:22.026 lat (usec): min=1833, max=449440, avg=158862.13, stdev=97573.30 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 58], 00:27:22.026 | 30.00th=[ 99], 40.00th=[ 129], 50.00th=[ 159], 60.00th=[ 184], 00:27:22.026 | 70.00th=[ 207], 80.00th=[ 241], 90.00th=[ 279], 95.00th=[ 317], 00:27:22.026 | 99.00th=[ 414], 99.50th=[ 435], 99.90th=[ 443], 99.95th=[ 443], 00:27:22.026 | 99.99th=[ 451] 00:27:22.026 bw ( KiB/s): min=45056, max=216064, per=9.09%, avg=103055.40, stdev=44963.45, samples=20 00:27:22.026 iops : min= 176, max= 844, avg=402.55, stdev=175.64, samples=20 00:27:22.026 lat (msec) : 2=0.10%, 4=0.42%, 10=2.40%, 20=4.74%, 50=10.61% 00:27:22.026 lat (msec) : 100=14.38%, 250=49.94%, 500=17.41% 00:27:22.026 cpu : usr=1.11%, sys=1.33%, ctx=2418, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,4089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job4: (groupid=0, jobs=1): err= 0: pid=1958808: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=509, BW=127MiB/s (133MB/s)(1286MiB/10105msec); 0 zone resets 00:27:22.026 slat (usec): min=18, max=103719, avg=1407.74, stdev=4927.27 00:27:22.026 clat (usec): min=1261, max=467662, avg=124199.05, stdev=89029.40 00:27:22.026 lat (usec): min=1337, max=467750, avg=125606.80, stdev=90127.03 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 38], 20.00th=[ 55], 00:27:22.026 | 30.00th=[ 65], 40.00th=[ 85], 50.00th=[ 103], 60.00th=[ 116], 00:27:22.026 | 70.00th=[ 140], 80.00th=[ 197], 90.00th=[ 247], 95.00th=[ 313], 00:27:22.026 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 447], 99.95th=[ 456], 00:27:22.026 | 99.99th=[ 468] 00:27:22.026 bw ( KiB/s): min=43008, max=328192, per=11.47%, avg=130078.10, stdev=74045.43, samples=20 00:27:22.026 iops : min= 168, max= 1282, avg=508.05, stdev=289.28, samples=20 00:27:22.026 lat (msec) : 2=0.17%, 4=0.72%, 10=1.26%, 20=2.00%, 50=10.23% 00:27:22.026 lat (msec) : 100=34.53%, 250=41.47%, 500=9.62% 00:27:22.026 cpu : usr=1.32%, sys=1.61%, ctx=2723, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 
00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,5144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job5: (groupid=0, jobs=1): err= 0: pid=1958809: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=451, BW=113MiB/s (118MB/s)(1142MiB/10121msec); 0 zone resets 00:27:22.026 slat (usec): min=16, max=105755, avg=1528.81, stdev=4740.51 00:27:22.026 clat (usec): min=1181, max=515922, avg=140213.21, stdev=95245.84 00:27:22.026 lat (usec): min=1226, max=540932, avg=141742.02, stdev=96526.83 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 33], 20.00th=[ 51], 00:27:22.026 | 30.00th=[ 83], 40.00th=[ 102], 50.00th=[ 131], 60.00th=[ 144], 00:27:22.026 | 70.00th=[ 169], 80.00th=[ 205], 90.00th=[ 288], 95.00th=[ 338], 00:27:22.026 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 514], 00:27:22.026 | 99.99th=[ 518] 00:27:22.026 bw ( KiB/s): min=53248, max=218624, per=10.17%, avg=115331.85, stdev=43900.65, samples=20 00:27:22.026 iops : min= 208, max= 854, avg=450.45, stdev=171.53, samples=20 00:27:22.026 lat (msec) : 2=0.20%, 4=0.42%, 10=2.08%, 20=3.33%, 50=13.57% 00:27:22.026 lat (msec) : 100=18.65%, 250=47.85%, 500=13.81%, 750=0.09% 00:27:22.026 cpu : usr=1.27%, sys=1.51%, ctx=2809, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,4568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job6: (groupid=0, jobs=1): err= 0: pid=1958810: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=474, BW=119MiB/s (124MB/s)(1192MiB/10055msec); 0 zone resets 00:27:22.026 slat (usec): min=16, max=67162, avg=1791.18, stdev=4294.84 00:27:22.026 clat (usec): min=1696, max=295674, avg=133089.01, stdev=70881.12 00:27:22.026 lat (msec): min=2, max=295, avg=134.88, stdev=71.91 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 54], 20.00th=[ 71], 00:27:22.026 | 30.00th=[ 92], 40.00th=[ 103], 50.00th=[ 117], 60.00th=[ 144], 00:27:22.026 | 70.00th=[ 174], 80.00th=[ 205], 90.00th=[ 239], 95.00th=[ 264], 00:27:22.026 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 296], 00:27:22.026 | 99.99th=[ 296] 00:27:22.026 bw ( KiB/s): min=59392, max=211456, per=10.62%, avg=120426.25, stdev=48864.58, samples=20 00:27:22.026 iops : min= 232, max= 826, avg=470.35, stdev=190.92, samples=20 00:27:22.026 lat (msec) : 2=0.04%, 4=0.29%, 10=1.62%, 20=2.22%, 50=4.22% 00:27:22.026 lat (msec) : 100=29.85%, 250=54.04%, 500=7.72% 00:27:22.026 cpu : usr=1.33%, sys=1.55%, ctx=1982, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,4767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job7: (groupid=0, jobs=1): err= 0: pid=1958811: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=329, 
BW=82.3MiB/s (86.3MB/s)(836MiB/10153msec); 0 zone resets 00:27:22.026 slat (usec): min=20, max=169621, avg=2320.09, stdev=6887.70 00:27:22.026 clat (msec): min=2, max=489, avg=191.81, stdev=107.92 00:27:22.026 lat (msec): min=2, max=489, avg=194.13, stdev=109.50 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 36], 20.00th=[ 101], 00:27:22.026 | 30.00th=[ 131], 40.00th=[ 157], 50.00th=[ 182], 60.00th=[ 209], 00:27:22.026 | 70.00th=[ 249], 80.00th=[ 300], 90.00th=[ 342], 95.00th=[ 380], 00:27:22.026 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 472], 99.95th=[ 489], 00:27:22.026 | 99.99th=[ 489] 00:27:22.026 bw ( KiB/s): min=38912, max=169984, per=7.40%, avg=83992.35, stdev=35566.75, samples=20 00:27:22.026 iops : min= 152, max= 664, avg=328.00, stdev=138.81, samples=20 00:27:22.026 lat (msec) : 4=0.39%, 10=1.97%, 20=3.95%, 50=5.23%, 100=8.10% 00:27:22.026 lat (msec) : 250=50.39%, 500=29.96% 00:27:22.026 cpu : usr=0.89%, sys=0.99%, ctx=1722, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,3344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job8: (groupid=0, jobs=1): err= 0: pid=1958812: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=416, BW=104MiB/s (109MB/s)(1061MiB/10183msec); 0 zone resets 00:27:22.026 slat (usec): min=17, max=158999, avg=1236.09, stdev=5393.75 00:27:22.026 clat (usec): min=1189, max=635848, avg=152249.77, stdev=101009.01 00:27:22.026 lat (usec): min=1223, max=635907, avg=153485.85, stdev=101940.54 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 45], 00:27:22.026 | 30.00th=[ 87], 40.00th=[ 126], 50.00th=[ 157], 60.00th=[ 182], 00:27:22.026 | 70.00th=[ 203], 80.00th=[ 228], 90.00th=[ 268], 95.00th=[ 317], 00:27:22.026 | 99.00th=[ 439], 99.50th=[ 542], 99.90th=[ 634], 99.95th=[ 634], 00:27:22.026 | 99.99th=[ 634] 00:27:22.026 bw ( KiB/s): min=66048, max=186368, per=9.43%, avg=107019.55, stdev=34508.62, samples=20 00:27:22.026 iops : min= 258, max= 728, avg=418.00, stdev=134.79, samples=20 00:27:22.026 lat (msec) : 2=0.35%, 4=0.54%, 10=2.43%, 20=6.95%, 50=10.98% 00:27:22.026 lat (msec) : 100=11.64%, 250=53.24%, 500=13.22%, 750=0.64% 00:27:22.026 cpu : usr=1.21%, sys=1.38%, ctx=3071, majf=0, minf=1 00:27:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.026 issued rwts: total=0,4243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.026 job9: (groupid=0, jobs=1): err= 0: pid=1958813: Sun Jul 14 15:00:00 2024 00:27:22.026 write: IOPS=403, BW=101MiB/s (106MB/s)(1026MiB/10164msec); 0 zone resets 00:27:22.026 slat (usec): min=15, max=147948, avg=1574.64, stdev=5752.17 00:27:22.026 clat (usec): min=1040, max=450010, avg=156937.87, stdev=110674.33 00:27:22.026 lat (usec): min=1081, max=450077, avg=158512.51, stdev=111928.04 00:27:22.026 clat percentiles (msec): 00:27:22.026 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 38], 00:27:22.026 | 30.00th=[ 62], 40.00th=[ 140], 50.00th=[ 
161], 60.00th=[ 186], 00:27:22.026 | 70.00th=[ 211], 80.00th=[ 245], 90.00th=[ 326], 95.00th=[ 355], 00:27:22.026 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:27:22.026 | 99.99th=[ 451] 00:27:22.026 bw ( KiB/s): min=38912, max=246784, per=9.12%, avg=103398.40, stdev=49230.43, samples=20 00:27:22.027 iops : min= 152, max= 964, avg=403.85, stdev=192.31, samples=20 00:27:22.027 lat (msec) : 2=0.44%, 4=0.93%, 10=5.10%, 20=6.07%, 50=13.75% 00:27:22.027 lat (msec) : 100=10.02%, 250=44.03%, 500=19.67% 00:27:22.027 cpu : usr=1.13%, sys=1.35%, ctx=2738, majf=0, minf=1 00:27:22.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:22.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.027 issued rwts: total=0,4102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.027 job10: (groupid=0, jobs=1): err= 0: pid=1958814: Sun Jul 14 15:00:00 2024 00:27:22.027 write: IOPS=360, BW=90.1MiB/s (94.4MB/s)(915MiB/10156msec); 0 zone resets 00:27:22.027 slat (usec): min=17, max=64723, avg=1776.65, stdev=5340.55 00:27:22.027 clat (usec): min=1118, max=376808, avg=175755.16, stdev=106884.93 00:27:22.027 lat (usec): min=1179, max=394980, avg=177531.81, stdev=108213.82 00:27:22.027 clat percentiles (msec): 00:27:22.027 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 74], 00:27:22.027 | 30.00th=[ 109], 40.00th=[ 146], 50.00th=[ 167], 60.00th=[ 186], 00:27:22.027 | 70.00th=[ 245], 80.00th=[ 296], 90.00th=[ 326], 95.00th=[ 347], 00:27:22.027 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:27:22.027 | 99.99th=[ 376] 00:27:22.027 bw ( KiB/s): min=52736, max=159232, per=8.12%, avg=92057.60, stdev=30024.34, samples=20 00:27:22.027 iops : min= 206, max= 622, avg=359.60, stdev=117.28, samples=20 00:27:22.027 lat (msec) : 2=0.52%, 4=0.55%, 10=3.28%, 20=5.25%, 50=7.49% 00:27:22.027 lat (msec) : 100=9.54%, 250=44.14%, 500=29.24% 00:27:22.027 cpu : usr=0.94%, sys=1.13%, ctx=2251, majf=0, minf=1 00:27:22.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:22.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.027 issued rwts: total=0,3659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.027 00:27:22.027 Run status group 0 (all jobs): 00:27:22.027 WRITE: bw=1108MiB/s (1162MB/s), 78.4MiB/s-127MiB/s (82.3MB/s-133MB/s), io=11.0GiB (11.8GB), run=10055-10188msec 00:27:22.027 00:27:22.027 Disk stats (read/write): 00:27:22.027 nvme0n1: ios=47/6372, merge=0/0, ticks=1295/1197467, in_queue=1198762, util=99.51% 00:27:22.027 nvme10n1: ios=49/6374, merge=0/0, ticks=259/1238131, in_queue=1238390, util=98.05% 00:27:22.027 nvme1n1: ios=47/9288, merge=0/0, ticks=2630/1204386, in_queue=1207016, util=99.97% 00:27:22.027 nvme2n1: ios=46/8019, merge=0/0, ticks=2685/1213161, in_queue=1215846, util=100.00% 00:27:22.027 nvme3n1: ios=51/10096, merge=0/0, ticks=3353/1209956, in_queue=1213309, util=100.00% 00:27:22.027 nvme4n1: ios=0/8939, merge=0/0, ticks=0/1204464, in_queue=1204464, util=98.11% 00:27:22.027 nvme5n1: ios=45/9238, merge=0/0, ticks=2668/1215101, in_queue=1217769, util=100.00% 00:27:22.027 nvme6n1: ios=50/6501, merge=0/0, ticks=2083/1198176, in_queue=1200259, util=100.00% 
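The per-device disk statistics above and below close out the randwrite pass; the trace that follows (multiconnection.sh lines 36-40) then disconnects each initiator-side controller and deletes the matching target subsystem. A minimal sketch of that teardown, assuming the cnode1..cnode11 naming shown in the trace and an illustrative rpc.py invocation in place of the suite's rpc_cmd wrapper:

# Sketch only: disconnect each controller, wait for its SPDKn serial to leave lsblk, then drop the subsystem.
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done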
00:27:22.027 nvme7n1: ios=39/8470, merge=0/0, ticks=1481/1239128, in_queue=1240609, util=100.00% 00:27:22.027 nvme8n1: ios=45/7984, merge=0/0, ticks=87/1216527, in_queue=1216614, util=99.51% 00:27:22.027 nvme9n1: ios=46/7136, merge=0/0, ticks=2401/1208994, in_queue=1211395, util=100.00% 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:22.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.027 15:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:22.027 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.027 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:22.286 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.286 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:22.850 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.850 15:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:23.108 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:23.108 
15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.108 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:23.367 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.367 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:23.626 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.626 15:00:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.626 15:00:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:23.886 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.886 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:24.143 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.143 15:00:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.143 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:24.401 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.401 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:24.658 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:24.658 15:00:03 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:24.658 rmmod nvme_tcp 00:27:24.658 rmmod nvme_fabrics 00:27:24.658 rmmod nvme_keyring 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1953250 ']' 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1953250 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1953250 ']' 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1953250 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1953250 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1953250' 00:27:24.658 killing process with pid 1953250 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1953250 00:27:24.658 15:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1953250 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.941 15:00:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.841 15:00:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.841 00:27:29.841 real 1m5.385s 00:27:29.841 user 3m42.760s 00:27:29.841 sys 0m21.401s 00:27:29.841 15:00:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.841 15:00:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.841 ************************************ 00:27:29.841 END TEST nvmf_multiconnection 00:27:29.841 ************************************ 00:27:29.841 15:00:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:29.841 15:00:09 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:29.841 15:00:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:29.841 15:00:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.841 15:00:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:29.841 ************************************ 00:27:29.841 START TEST nvmf_initiator_timeout 00:27:29.841 ************************************ 00:27:29.841 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:30.102 * Looking for test storage... 00:27:30.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.102 15:00:09 
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.102 15:00:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.017 
15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.017 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:32.018 00:27:32.018 --- 10.0.0.2 ping statistics --- 00:27:32.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.018 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:32.018 00:27:32.018 --- 10.0.0.1 ping statistics --- 00:27:32.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.018 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1963150 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1963150 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1963150 ']' 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.018 15:00:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:32.275 [2024-07-14 15:00:11.359562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
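For readability, the nvmftestinit plumbing traced above condenses to the shell sketch below; the namespace, interface names, addresses and flags are copied verbatim from the trace, only the xtrace bookkeeping is dropped.

ip netns add cvl_0_0_ns_spdk                                     # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # first E810 port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator keeps the second port in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # firewall rule for the NVMe/TCP port, as logged
ping -c 1 10.0.0.2                                               # root ns -> namespace reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> root ns reachability check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target app on cores 0-3, started next in the trace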
00:27:32.275 [2024-07-14 15:00:11.359753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.275 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.275 [2024-07-14 15:00:11.519986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.531 [2024-07-14 15:00:11.783613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.531 [2024-07-14 15:00:11.783683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.531 [2024-07-14 15:00:11.783711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.531 [2024-07-14 15:00:11.783733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.531 [2024-07-14 15:00:11.783756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.531 [2024-07-14 15:00:11.783892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.531 [2024-07-14 15:00:11.783947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.531 [2024-07-14 15:00:11.783968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.531 [2024-07-14 15:00:11.783979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 Malloc0 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 Delay0 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.097 15:00:12 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 [2024-07-14 15:00:12.333529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.097 [2024-07-14 15:00:12.362922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.097 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:34.031 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:34.031 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:34.031 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:34.031 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:34.031 15:00:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:35.925 15:00:14 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1963589 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:35.925 15:00:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:35.925 [global] 00:27:35.925 thread=1 00:27:35.925 invalidate=1 00:27:35.925 rw=write 00:27:35.925 time_based=1 00:27:35.925 runtime=60 00:27:35.925 ioengine=libaio 00:27:35.925 direct=1 00:27:35.925 bs=4096 00:27:35.925 iodepth=1 00:27:35.925 norandommap=0 00:27:35.925 numjobs=1 00:27:35.925 00:27:35.925 verify_dump=1 00:27:35.925 verify_backlog=512 00:27:35.925 verify_state_save=0 00:27:35.925 do_verify=1 00:27:35.925 verify=crc32c-intel 00:27:35.925 [job0] 00:27:35.925 filename=/dev/nvme0n1 00:27:35.925 Could not set queue depth (nvme0n1) 00:27:35.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:35.925 fio-3.35 00:27:35.925 Starting 1 thread 00:27:39.200 15:00:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:39.200 15:00:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.200 15:00:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:39.200 true 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:39.200 true 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:39.200 true 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:39.200 true 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.200 15:00:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.752 true 00:27:41.752 15:00:21 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.752 true 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.752 true 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.752 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.010 true 00:27:42.010 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.010 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:42.010 15:00:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1963589 00:28:38.219 00:28:38.219 job0: (groupid=0, jobs=1): err= 0: pid=1963662: Sun Jul 14 15:01:15 2024 00:28:38.219 read: IOPS=72, BW=290KiB/s (297kB/s)(17.0MiB/60036msec) 00:28:38.220 slat (nsec): min=5158, max=51603, avg=10929.16, stdev=7099.71 00:28:38.220 clat (usec): min=294, max=41166k, avg=13488.73, stdev=624346.51 00:28:38.220 lat (usec): min=299, max=41166k, avg=13499.66, stdev=624346.54 00:28:38.220 clat percentiles (usec): 00:28:38.220 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 00:28:38.220 | 20.00th=[ 326], 30.00th=[ 334], 40.00th=[ 347], 00:28:38.220 | 50.00th=[ 359], 60.00th=[ 375], 70.00th=[ 396], 00:28:38.220 | 80.00th=[ 478], 90.00th=[ 537], 95.00th=[ 41157], 00:28:38.220 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:28:38.220 | 99.95th=[ 44827], 99.99th=[17112761] 00:28:38.220 write: IOPS=76, BW=307KiB/s (314kB/s)(18.0MiB/60036msec); 0 zone resets 00:28:38.220 slat (nsec): min=6423, max=66643, avg=12888.18, stdev=8912.62 00:28:38.220 clat (usec): min=206, max=472, avg=270.45, stdev=48.09 00:28:38.220 lat (usec): min=213, max=490, avg=283.34, stdev=54.90 00:28:38.220 clat percentiles (usec): 00:28:38.220 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:28:38.220 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:28:38.220 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 379], 00:28:38.220 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 457], 99.95th=[ 469], 00:28:38.220 | 99.99th=[ 474] 00:28:38.220 bw ( KiB/s): min= 2168, max= 8192, per=100.00%, avg=5266.29, stdev=1911.33, samples=7 00:28:38.220 iops : min= 542, max= 2048, avg=1316.57, stdev=477.83, samples=7 00:28:38.220 lat (usec) : 250=25.85%, 500=67.63%, 750=2.14% 00:28:38.220 lat (msec) : 4=0.01%, 50=4.35%, >=2000=0.01% 00:28:38.220 cpu : usr=0.14%, sys=0.24%, ctx=8957, 
majf=0, minf=2 00:28:38.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:38.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.220 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:38.220 00:28:38.220 Run status group 0 (all jobs): 00:28:38.220 READ: bw=290KiB/s (297kB/s), 290KiB/s-290KiB/s (297kB/s-297kB/s), io=17.0MiB (17.8MB), run=60036-60036msec 00:28:38.220 WRITE: bw=307KiB/s (314kB/s), 307KiB/s-307KiB/s (314kB/s-314kB/s), io=18.0MiB (18.9MB), run=60036-60036msec 00:28:38.220 00:28:38.220 Disk stats (read/write): 00:28:38.220 nvme0n1: ios=4443/4608, merge=0/0, ticks=18502/1175, in_queue=19677, util=99.57% 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:38.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:38.220 nvmf hotplug test: fio successful as expected 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:38.220 rmmod nvme_tcp 00:28:38.220 rmmod nvme_fabrics 00:28:38.220 rmmod nvme_keyring 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1963150 ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1963150 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1963150 ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1963150 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1963150 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1963150' 00:28:38.220 killing process with pid 1963150 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1963150 00:28:38.220 15:01:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1963150 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.220 15:01:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.123 15:01:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:40.123 00:28:40.123 real 1m9.946s 00:28:40.123 user 4m15.317s 00:28:40.123 sys 0m6.636s 00:28:40.123 15:01:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:40.123 15:01:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.123 ************************************ 00:28:40.123 END TEST nvmf_initiator_timeout 00:28:40.123 ************************************ 00:28:40.123 15:01:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:40.123 15:01:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:40.123 15:01:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:40.123 15:01:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:40.123 
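With the END TEST marker above, the whole nvmf_initiator_timeout exercise reduces to roughly the RPC/CLI sequence below. Commands and arguments are taken from the trace; rpc_cmd appears to be the autotest wrapper around SPDK's rpc.py (it targets /var/tmp/spdk.sock per the waitforlisten lines above), delay-bdev latencies are given in microseconds (so 31000000 is about 31 s, presumably past the initiator I/O timeout the test is named after), and long paths are shortened relative to the spdk checkout.

rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                # 64 MiB backing bdev, 512 B blocks
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30      # wrap it with 30 us delays
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # NVME_HOST carries the generated --hostnqn/--hostid
waitforserial SPDKISFASTANDAWESOME                                          # poll lsblk until /dev/nvme0n1 shows up
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &                # 60 s verified 4 KiB write job on /dev/nvme0n1
fio_pid=$!
sleep 3
rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000                 # stall I/O mid-run; values as logged,
rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000                 #  p99_write is bumped to 310000000 in the trace
rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
rpc_cmd bdev_delay_update_latency Delay0 avg_read 30                        # then all four knobs restored to 30 us
wait $fio_pid                                                               # fio exits err=0: ~290 KiB/s read, ~307 KiB/s write
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1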
15:01:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.123 15:01:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.027 15:01:20 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.027 15:01:20 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.027 15:01:20 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.027 15:01:20 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.027 15:01:20 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:42.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.028 15:01:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:42.028 15:01:21 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:42.028 15:01:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:42.028 15:01:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.028 15:01:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.028 ************************************ 00:28:42.028 START TEST nvmf_perf_adq 00:28:42.028 ************************************ 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:42.028 * Looking for test storage... 
00:28:42.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.028 15:01:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:43.928 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.928 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.928 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.928 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.928 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 
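The per-PCI netdev lookup that follows (and that already ran twice earlier in this log) is, stripped of the xtrace noise, essentially the loop below; this is a paraphrase of nvmf/common.sh as traced, with the link-state and RDMA branches omitted.

for pci in "${pci_devs[@]}"; do                          # the two whitelisted E810 functions, 0000:0a:00.0 and .1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # sysfs lists the kernel netdevs behind this function
    pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the interface name, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                     # net_devs later seeds TCP_INTERFACE_LIST (perf_adq.sh@12)
done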
00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.929 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:43.929 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:44.494 15:01:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:46.392 15:01:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:51.669 15:01:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.669 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.670 15:01:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:51.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:28:51.670 00:28:51.670 --- 10.0.0.2 ping statistics --- 00:28:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.670 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:51.670 00:28:51.670 --- 10.0.0.1 ping statistics --- 00:28:51.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.670 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1975295 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1975295 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1975295 ']' 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.670 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.671 15:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.671 [2024-07-14 15:01:30.953047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
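[editor's note] The nvmf_tcp_init sequence traced above splits the E810 port pair into a target side and an initiator side: cvl_0_0 is moved into a dedicated network namespace and the pair is addressed as a point-to-point 10.0.0.0/24 link, which the two pings then verify in both directions. A sketch assembled from the traced commands (interface, namespace, and address values exactly as they appear in the log; run as root):

    # Target/initiator split used by nvmf_tcp_init in this run.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP replies in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside the namespace (NVMF_TARGET_NS_CMD is prepended to NVMF_APP), so it listens on 10.0.0.2 while the perf initiator connects from the host side.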
00:28:51.671 [2024-07-14 15:01:30.953207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.929 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.929 [2024-07-14 15:01:31.098526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.188 [2024-07-14 15:01:31.367644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.188 [2024-07-14 15:01:31.367714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.188 [2024-07-14 15:01:31.367742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.188 [2024-07-14 15:01:31.367763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.188 [2024-07-14 15:01:31.367787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.188 [2024-07-14 15:01:31.367922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.188 [2024-07-14 15:01:31.367971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.188 [2024-07-14 15:01:31.367987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.188 [2024-07-14 15:01:31.367995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.789 15:01:31 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.048 [2024-07-14 15:01:32.311416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.048 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.306 Malloc1 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.306 [2024-07-14 15:01:32.418202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1975462 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:53.306 15:01:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:53.306 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.205 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:55.205 15:01:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.205 15:01:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.205 15:01:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.205 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:55.205 
"tick_rate": 2700000000, 00:28:55.205 "poll_groups": [ 00:28:55.205 { 00:28:55.205 "name": "nvmf_tgt_poll_group_000", 00:28:55.205 "admin_qpairs": 1, 00:28:55.205 "io_qpairs": 1, 00:28:55.205 "current_admin_qpairs": 1, 00:28:55.205 "current_io_qpairs": 1, 00:28:55.205 "pending_bdev_io": 0, 00:28:55.205 "completed_nvme_io": 17575, 00:28:55.205 "transports": [ 00:28:55.205 { 00:28:55.205 "trtype": "TCP" 00:28:55.205 } 00:28:55.205 ] 00:28:55.205 }, 00:28:55.205 { 00:28:55.205 "name": "nvmf_tgt_poll_group_001", 00:28:55.205 "admin_qpairs": 0, 00:28:55.205 "io_qpairs": 1, 00:28:55.205 "current_admin_qpairs": 0, 00:28:55.205 "current_io_qpairs": 1, 00:28:55.205 "pending_bdev_io": 0, 00:28:55.205 "completed_nvme_io": 17482, 00:28:55.205 "transports": [ 00:28:55.205 { 00:28:55.205 "trtype": "TCP" 00:28:55.205 } 00:28:55.205 ] 00:28:55.205 }, 00:28:55.205 { 00:28:55.205 "name": "nvmf_tgt_poll_group_002", 00:28:55.205 "admin_qpairs": 0, 00:28:55.205 "io_qpairs": 1, 00:28:55.205 "current_admin_qpairs": 0, 00:28:55.205 "current_io_qpairs": 1, 00:28:55.205 "pending_bdev_io": 0, 00:28:55.205 "completed_nvme_io": 16825, 00:28:55.205 "transports": [ 00:28:55.205 { 00:28:55.205 "trtype": "TCP" 00:28:55.205 } 00:28:55.205 ] 00:28:55.205 }, 00:28:55.205 { 00:28:55.205 "name": "nvmf_tgt_poll_group_003", 00:28:55.205 "admin_qpairs": 0, 00:28:55.205 "io_qpairs": 1, 00:28:55.205 "current_admin_qpairs": 0, 00:28:55.205 "current_io_qpairs": 1, 00:28:55.205 "pending_bdev_io": 0, 00:28:55.205 "completed_nvme_io": 16859, 00:28:55.205 "transports": [ 00:28:55.205 { 00:28:55.205 "trtype": "TCP" 00:28:55.205 } 00:28:55.205 ] 00:28:55.205 } 00:28:55.206 ] 00:28:55.206 }' 00:28:55.206 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:55.206 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:55.206 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:55.206 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:55.206 15:01:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1975462 00:29:05.170 Initializing NVMe Controllers 00:29:05.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:05.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:05.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:05.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:05.170 Initialization complete. Launching workers. 
00:29:05.170 ======================================================== 00:29:05.170 Latency(us) 00:29:05.170 Device Information : IOPS MiB/s Average min max 00:29:05.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9328.69 36.44 6860.41 2967.71 10510.86 00:29:05.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9577.49 37.41 6683.39 2924.94 10392.83 00:29:05.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9236.69 36.08 6929.33 3195.21 11702.01 00:29:05.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9681.19 37.82 6612.14 2821.06 9590.77 00:29:05.170 ======================================================== 00:29:05.170 Total : 37824.06 147.75 6768.87 2821.06 11702.01 00:29:05.170 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:05.170 rmmod nvme_tcp 00:29:05.170 rmmod nvme_fabrics 00:29:05.170 rmmod nvme_keyring 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1975295 ']' 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1975295 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1975295 ']' 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1975295 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1975295 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1975295' 00:29:05.170 killing process with pid 1975295 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1975295 00:29:05.170 15:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1975295 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.170 15:01:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.073 15:01:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.073 15:01:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:07.073 15:01:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:07.638 15:01:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:09.545 15:01:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:14.821 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:14.821 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.822 15:01:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
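[editor's note] As in the first pass, each matched PCI function is resolved to its kernel net device names through sysfs rather than by parsing ip or ethtool output. A condensed sketch of that lookup, with variable names following the trace:

    # sysfs-based PCI-to-netdev resolution used by nvmf/common.sh@382-401.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is why the "Found net devices under ..." lines reappear after the ice driver reload: the interfaces come back with the same cvl_0_0/cvl_0_1 names and the namespace setup is repeated from scratch.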
00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.822 
15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:29:14.822 00:29:14.822 --- 10.0.0.2 ping statistics --- 00:29:14.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.822 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:29:14.822 00:29:14.822 --- 10.0.0.1 ping statistics --- 00:29:14.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.822 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:14.822 net.core.busy_poll = 1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:14.822 net.core.busy_read = 1 00:29:14.822 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:14.823 15:01:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1978202 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1978202 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1978202 ']' 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.823 15:01:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.082 [2024-07-14 15:01:54.205086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:15.082 [2024-07-14 15:01:54.205267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.082 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.082 [2024-07-14 15:01:54.344687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.340 [2024-07-14 15:01:54.576199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.340 [2024-07-14 15:01:54.576271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.340 [2024-07-14 15:01:54.576296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.340 [2024-07-14 15:01:54.576314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.340 [2024-07-14 15:01:54.576332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
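[editor's note] This second pass enables ADQ on the target-side port before restarting the target: hardware TC offload, kernel busy polling, an mqprio qdisc that splits the queues into two traffic classes, and a hardware flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. A sketch of that sequence as traced above (the log runs each command via "ip netns exec cvl_0_0_ns_spdk ..."; queue counts and addresses are taken verbatim from the trace):

    # ADQ host-side setup condensed from adq_configure_driver in the trace.
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
       dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target below is then started with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1, so SPDK can place incoming connections on the cores that own the TC 1 queues; the later nvmf_get_stats check only requires the poll groups with IO qpairs to be non-idle (two groups carry two qpairs each in this run).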
00:29:15.340 [2024-07-14 15:01:54.576455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.340 [2024-07-14 15:01:54.576506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.340 [2024-07-14 15:01:54.576545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.340 [2024-07-14 15:01:54.576557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.905 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:16.164 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.165 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 [2024-07-14 15:01:55.592105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 Malloc1 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.423 [2024-07-14 15:01:55.697082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1978485 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:16.423 15:01:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:16.682 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:18.582 "tick_rate": 2700000000, 00:29:18.582 "poll_groups": [ 00:29:18.582 { 00:29:18.582 "name": "nvmf_tgt_poll_group_000", 00:29:18.582 "admin_qpairs": 1, 00:29:18.582 "io_qpairs": 2, 00:29:18.582 "current_admin_qpairs": 1, 00:29:18.582 "current_io_qpairs": 2, 00:29:18.582 "pending_bdev_io": 0, 00:29:18.582 "completed_nvme_io": 18937, 00:29:18.582 "transports": [ 00:29:18.582 { 00:29:18.582 "trtype": "TCP" 00:29:18.582 } 00:29:18.582 ] 00:29:18.582 }, 00:29:18.582 { 00:29:18.582 "name": "nvmf_tgt_poll_group_001", 00:29:18.582 "admin_qpairs": 0, 00:29:18.582 "io_qpairs": 2, 00:29:18.582 "current_admin_qpairs": 0, 00:29:18.582 "current_io_qpairs": 2, 00:29:18.582 "pending_bdev_io": 0, 00:29:18.582 "completed_nvme_io": 19328, 00:29:18.582 "transports": [ 00:29:18.582 { 00:29:18.582 "trtype": "TCP" 00:29:18.582 } 00:29:18.582 ] 00:29:18.582 }, 00:29:18.582 { 00:29:18.582 "name": "nvmf_tgt_poll_group_002", 00:29:18.582 "admin_qpairs": 0, 00:29:18.582 "io_qpairs": 0, 00:29:18.582 "current_admin_qpairs": 0, 00:29:18.582 "current_io_qpairs": 0, 00:29:18.582 "pending_bdev_io": 0, 00:29:18.582 "completed_nvme_io": 0, 
00:29:18.582 "transports": [ 00:29:18.582 { 00:29:18.582 "trtype": "TCP" 00:29:18.582 } 00:29:18.582 ] 00:29:18.582 }, 00:29:18.582 { 00:29:18.582 "name": "nvmf_tgt_poll_group_003", 00:29:18.582 "admin_qpairs": 0, 00:29:18.582 "io_qpairs": 0, 00:29:18.582 "current_admin_qpairs": 0, 00:29:18.582 "current_io_qpairs": 0, 00:29:18.582 "pending_bdev_io": 0, 00:29:18.582 "completed_nvme_io": 0, 00:29:18.582 "transports": [ 00:29:18.582 { 00:29:18.582 "trtype": "TCP" 00:29:18.582 } 00:29:18.582 ] 00:29:18.582 } 00:29:18.582 ] 00:29:18.582 }' 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:18.582 15:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1978485 00:29:26.725 Initializing NVMe Controllers 00:29:26.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:26.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:26.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:26.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:26.725 Initialization complete. Launching workers. 00:29:26.725 ======================================================== 00:29:26.725 Latency(us) 00:29:26.725 Device Information : IOPS MiB/s Average min max 00:29:26.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5355.90 20.92 11953.91 2245.79 59553.39 00:29:26.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5138.90 20.07 12492.90 1956.47 59351.87 00:29:26.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5442.10 21.26 11763.50 2341.38 58066.72 00:29:26.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5401.30 21.10 11852.25 2284.41 58166.06 00:29:26.725 ======================================================== 00:29:26.725 Total : 21338.19 83.35 12009.42 1956.47 59553.39 00:29:26.725 00:29:26.725 15:02:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:26.725 15:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:26.725 15:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:26.725 15:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:26.725 15:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:26.725 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:26.725 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:26.725 rmmod nvme_tcp 00:29:26.725 rmmod nvme_fabrics 00:29:26.985 rmmod nvme_keyring 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1978202 ']' 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1978202 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1978202 ']' 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1978202 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1978202 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1978202' 00:29:26.985 killing process with pid 1978202 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1978202 00:29:26.985 15:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1978202 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.360 15:02:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.262 15:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:30.262 15:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:30.262 00:29:30.262 real 0m48.540s 00:29:30.262 user 2m53.524s 00:29:30.262 sys 0m9.650s 00:29:30.262 15:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:30.262 15:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.262 ************************************ 00:29:30.262 END TEST nvmf_perf_adq 00:29:30.262 ************************************ 00:29:30.520 15:02:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:30.520 15:02:09 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:30.520 15:02:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:30.520 15:02:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.520 15:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.520 ************************************ 00:29:30.520 START TEST nvmf_shutdown 00:29:30.520 ************************************ 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:30.520 * Looking for test storage... 
00:29:30.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.520 ************************************ 00:29:30.520 START TEST nvmf_shutdown_tc1 00:29:30.520 ************************************ 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:29:30.520 15:02:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.520 15:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:32.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:32.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.423 15:02:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:32.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:32.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:32.423 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.424 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:32.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:29:32.682 00:29:32.682 --- 10.0.0.2 ping statistics --- 00:29:32.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.682 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:29:32.682 00:29:32.682 --- 10.0.0.1 ping statistics --- 00:29:32.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.682 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1981773 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1981773 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1981773 ']' 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.682 15:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.682 [2024-07-14 15:02:11.852835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:32.682 [2024-07-14 15:02:11.853001] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.682 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.942 [2024-07-14 15:02:11.990741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.201 [2024-07-14 15:02:12.252479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.201 [2024-07-14 15:02:12.252556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.201 [2024-07-14 15:02:12.252583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.201 [2024-07-14 15:02:12.252605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.201 [2024-07-14 15:02:12.252626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.201 [2024-07-14 15:02:12.252767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.201 [2024-07-14 15:02:12.252819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.201 [2024-07-14 15:02:12.252932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.201 [2024-07-14 15:02:12.252939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 [2024-07-14 15:02:12.833331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:33.768 15:02:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.768 15:02:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 Malloc1 00:29:33.768 [2024-07-14 15:02:12.961346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.768 Malloc2 00:29:34.025 Malloc3 00:29:34.025 Malloc4 00:29:34.025 Malloc5 00:29:34.282 Malloc6 00:29:34.282 Malloc7 00:29:34.541 Malloc8 00:29:34.541 Malloc9 00:29:34.541 Malloc10 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1982082 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1982082 
/var/tmp/bdevperf.sock 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1982082 ']' 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 
"name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 
00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.800 { 00:29:34.800 "params": { 00:29:34.800 "name": "Nvme$subsystem", 00:29:34.800 "trtype": "$TEST_TRANSPORT", 00:29:34.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.800 "adrfam": "ipv4", 00:29:34.800 "trsvcid": "$NVMF_PORT", 00:29:34.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.800 "hdgst": ${hdgst:-false}, 00:29:34.800 "ddgst": ${ddgst:-false} 00:29:34.800 }, 00:29:34.800 "method": "bdev_nvme_attach_controller" 00:29:34.800 } 00:29:34.800 EOF 00:29:34.800 )") 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:34.800 15:02:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.800 "params": { 00:29:34.801 "name": "Nvme1", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme2", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme3", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme4", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme5", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme6", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme7", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme8", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:34.801 "hdgst": false, 
00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme9", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 },{ 00:29:34.801 "params": { 00:29:34.801 "name": "Nvme10", 00:29:34.801 "trtype": "tcp", 00:29:34.801 "traddr": "10.0.0.2", 00:29:34.801 "adrfam": "ipv4", 00:29:34.801 "trsvcid": "4420", 00:29:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:34.801 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:34.801 "hdgst": false, 00:29:34.801 "ddgst": false 00:29:34.801 }, 00:29:34.801 "method": "bdev_nvme_attach_controller" 00:29:34.801 }' 00:29:34.801 [2024-07-14 15:02:13.965408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:34.801 [2024-07-14 15:02:13.965541] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:34.801 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.801 [2024-07-14 15:02:14.098421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.060 [2024-07-14 15:02:14.340147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1982082 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:37.593 15:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:38.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1982082 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1981773 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:38.527 15:02:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.527 { 00:29:38.527 "params": { 00:29:38.527 "name": "Nvme$subsystem", 00:29:38.527 "trtype": "$TEST_TRANSPORT", 00:29:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.527 "adrfam": "ipv4", 00:29:38.527 "trsvcid": "$NVMF_PORT", 00:29:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.527 "hdgst": ${hdgst:-false}, 00:29:38.527 "ddgst": ${ddgst:-false} 00:29:38.527 }, 00:29:38.527 "method": "bdev_nvme_attach_controller" 00:29:38.527 } 00:29:38.527 EOF 00:29:38.527 )") 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.527 { 00:29:38.527 "params": { 00:29:38.527 "name": "Nvme$subsystem", 00:29:38.527 "trtype": "$TEST_TRANSPORT", 00:29:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.527 "adrfam": "ipv4", 00:29:38.527 "trsvcid": "$NVMF_PORT", 00:29:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.527 "hdgst": ${hdgst:-false}, 00:29:38.527 "ddgst": ${ddgst:-false} 00:29:38.527 }, 00:29:38.527 "method": "bdev_nvme_attach_controller" 00:29:38.527 } 00:29:38.527 EOF 00:29:38.527 )") 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.527 { 00:29:38.527 "params": { 00:29:38.527 "name": "Nvme$subsystem", 00:29:38.527 "trtype": "$TEST_TRANSPORT", 00:29:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.527 "adrfam": "ipv4", 00:29:38.527 "trsvcid": "$NVMF_PORT", 00:29:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.527 "hdgst": ${hdgst:-false}, 00:29:38.527 "ddgst": ${ddgst:-false} 00:29:38.527 }, 00:29:38.527 "method": "bdev_nvme_attach_controller" 00:29:38.527 } 00:29:38.527 EOF 00:29:38.527 )") 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.527 { 00:29:38.527 "params": { 00:29:38.527 "name": "Nvme$subsystem", 00:29:38.527 "trtype": "$TEST_TRANSPORT", 00:29:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.527 "adrfam": "ipv4", 00:29:38.527 "trsvcid": "$NVMF_PORT", 00:29:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.527 "hdgst": ${hdgst:-false}, 00:29:38.527 "ddgst": ${ddgst:-false} 00:29:38.527 }, 00:29:38.527 "method": "bdev_nvme_attach_controller" 00:29:38.527 } 00:29:38.527 EOF 00:29:38.527 )") 00:29:38.527 15:02:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.527 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.527 { 00:29:38.527 "params": { 00:29:38.527 "name": "Nvme$subsystem", 00:29:38.527 "trtype": "$TEST_TRANSPORT", 00:29:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.527 "adrfam": "ipv4", 00:29:38.527 "trsvcid": "$NVMF_PORT", 00:29:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.527 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.528 { 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme$subsystem", 00:29:38.528 "trtype": "$TEST_TRANSPORT", 00:29:38.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "$NVMF_PORT", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.528 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.528 { 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme$subsystem", 00:29:38.528 "trtype": "$TEST_TRANSPORT", 00:29:38.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "$NVMF_PORT", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.528 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.528 { 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme$subsystem", 00:29:38.528 "trtype": "$TEST_TRANSPORT", 00:29:38.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "$NVMF_PORT", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.528 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.528 { 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme$subsystem", 00:29:38.528 "trtype": "$TEST_TRANSPORT", 00:29:38.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "$NVMF_PORT", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.528 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.528 { 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme$subsystem", 00:29:38.528 "trtype": "$TEST_TRANSPORT", 00:29:38.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "$NVMF_PORT", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.528 "hdgst": ${hdgst:-false}, 00:29:38.528 "ddgst": ${ddgst:-false} 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 } 00:29:38.528 EOF 00:29:38.528 )") 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:38.528 15:02:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme1", 00:29:38.528 "trtype": "tcp", 00:29:38.528 "traddr": "10.0.0.2", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "4420", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.528 "hdgst": false, 00:29:38.528 "ddgst": false 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 },{ 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme2", 00:29:38.528 "trtype": "tcp", 00:29:38.528 "traddr": "10.0.0.2", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "4420", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:38.528 "hdgst": false, 00:29:38.528 "ddgst": false 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 },{ 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme3", 00:29:38.528 "trtype": "tcp", 00:29:38.528 "traddr": "10.0.0.2", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "4420", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:38.528 "hdgst": false, 00:29:38.528 "ddgst": false 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 },{ 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme4", 00:29:38.528 "trtype": "tcp", 00:29:38.528 "traddr": "10.0.0.2", 00:29:38.528 "adrfam": "ipv4", 00:29:38.528 "trsvcid": "4420", 00:29:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:38.528 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:38.528 "hdgst": false, 00:29:38.528 "ddgst": false 00:29:38.528 }, 00:29:38.528 "method": "bdev_nvme_attach_controller" 00:29:38.528 },{ 00:29:38.528 "params": { 00:29:38.528 "name": "Nvme5", 00:29:38.528 "trtype": "tcp", 00:29:38.528 "traddr": "10.0.0.2", 00:29:38.528 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:38.529 "hdgst": false, 00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 },{ 00:29:38.529 "params": { 00:29:38.529 "name": "Nvme6", 00:29:38.529 "trtype": "tcp", 00:29:38.529 "traddr": "10.0.0.2", 00:29:38.529 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:38.529 "hdgst": false, 00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 },{ 00:29:38.529 "params": { 00:29:38.529 "name": "Nvme7", 00:29:38.529 "trtype": "tcp", 00:29:38.529 "traddr": "10.0.0.2", 00:29:38.529 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:38.529 "hdgst": false, 00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 },{ 00:29:38.529 "params": { 00:29:38.529 "name": "Nvme8", 00:29:38.529 "trtype": "tcp", 00:29:38.529 "traddr": "10.0.0.2", 00:29:38.529 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:38.529 "hdgst": false, 
00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 },{ 00:29:38.529 "params": { 00:29:38.529 "name": "Nvme9", 00:29:38.529 "trtype": "tcp", 00:29:38.529 "traddr": "10.0.0.2", 00:29:38.529 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:38.529 "hdgst": false, 00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 },{ 00:29:38.529 "params": { 00:29:38.529 "name": "Nvme10", 00:29:38.529 "trtype": "tcp", 00:29:38.529 "traddr": "10.0.0.2", 00:29:38.529 "adrfam": "ipv4", 00:29:38.529 "trsvcid": "4420", 00:29:38.529 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:38.529 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:38.529 "hdgst": false, 00:29:38.529 "ddgst": false 00:29:38.529 }, 00:29:38.529 "method": "bdev_nvme_attach_controller" 00:29:38.529 }' 00:29:38.529 [2024-07-14 15:02:17.795266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:38.529 [2024-07-14 15:02:17.795397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982516 ] 00:29:38.787 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.787 [2024-07-14 15:02:17.923486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.046 [2024-07-14 15:02:18.164023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.950 Running I/O for 1 seconds... 00:29:42.327 00:29:42.327 Latency(us) 00:29:42.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.327 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme1n1 : 1.08 183.27 11.45 0.00 0.00 336110.68 23787.14 298261.62 00:29:42.327 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme2n1 : 1.10 177.50 11.09 0.00 0.00 348555.79 5170.06 312242.63 00:29:42.327 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme3n1 : 1.19 214.50 13.41 0.00 0.00 283421.39 23107.51 304475.40 00:29:42.327 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme4n1 : 1.19 215.16 13.45 0.00 0.00 279196.63 24466.77 307582.29 00:29:42.327 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme5n1 : 1.16 165.02 10.31 0.00 0.00 357004.33 25826.04 309135.74 00:29:42.327 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme6n1 : 1.20 213.17 13.32 0.00 0.00 272150.19 24369.68 304475.40 00:29:42.327 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme7n1 : 1.22 210.31 13.14 0.00 0.00 270596.17 22524.97 310689.19 00:29:42.327 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 
0x0 length 0x400 00:29:42.327 Nvme8n1 : 1.21 215.85 13.49 0.00 0.00 258831.60 2548.62 304475.40 00:29:42.327 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme9n1 : 1.18 162.40 10.15 0.00 0.00 337194.60 23301.69 347971.89 00:29:42.327 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.327 Verification LBA range: start 0x0 length 0x400 00:29:42.327 Nvme10n1 : 1.23 213.45 13.34 0.00 0.00 253043.26 3883.61 313796.08 00:29:42.327 =================================================================================================================== 00:29:42.327 Total : 1970.62 123.16 0.00 0.00 294614.50 2548.62 347971.89 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.267 rmmod nvme_tcp 00:29:43.267 rmmod nvme_fabrics 00:29:43.267 rmmod nvme_keyring 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1981773 ']' 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1981773 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1981773 ']' 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1981773 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1981773 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
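
bdevperf reports both IOPS and MiB/s per bdev in the summary above; since every job used fixed 64 KiB reads ("IO size: 65536" in each Job line), the two columns differ only by a factor of 16. A quick sanity check against the Total row:

# Cross-check of the table above: with 64 KiB I/Os, MiB/s is simply IOPS / 16.
awk 'BEGIN {
  io_size = 65536                       # "IO size: 65536" from the Job lines
  iops    = 1970.62                     # Total IOPS from the summary above
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 123.16
}'
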
00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1981773' 00:29:43.267 killing process with pid 1981773 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1981773 00:29:43.267 15:02:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1981773 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.588 15:02:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.496 00:29:48.496 real 0m17.704s 00:29:48.496 user 0m57.820s 00:29:48.496 sys 0m3.744s 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.496 ************************************ 00:29:48.496 END TEST nvmf_shutdown_tc1 00:29:48.496 ************************************ 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.496 ************************************ 00:29:48.496 START TEST nvmf_shutdown_tc2 00:29:48.496 ************************************ 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.496 15:02:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:48.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:48.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:48.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:48.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:29:48.496 00:29:48.496 --- 10.0.0.2 ping statistics --- 00:29:48.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.496 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:48.496 00:29:48.496 --- 10.0.0.1 ping statistics --- 00:29:48.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.496 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1983787 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1983787 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1983787 ']' 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:48.496 15:02:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.496 [2024-07-14 15:02:27.714951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:48.496 [2024-07-14 15:02:27.715096] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.496 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.754 [2024-07-14 15:02:27.857090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.012 [2024-07-14 15:02:28.115366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.012 [2024-07-14 15:02:28.115431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.012 [2024-07-14 15:02:28.115469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.012 [2024-07-14 15:02:28.115489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.012 [2024-07-14 15:02:28.115510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
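
The target here is launched with '-i 0 -e 0xFFFF -m 0x1E': -e enables all tracepoint groups (hence the spdk_trace hints above), and -m selects the cores the reactors may use. 0x1E is binary 11110, which is why the app reports four available cores and, just below, starts reactors on cores 1-4 while core 0 stays free for the bdevperf initiator (-c 0x1). A small sketch for decoding such a mask:

# Decode an SPDK/DPDK core mask into the CPUs it selects.
# 0x1E = 0b11110 -> cores 1 2 3 4; core 0 is left for bdevperf (-c 0x1).
mask=0x1E
for cpu in $(seq 0 31); do
  if (( (mask >> cpu) & 1 )); then
    printf 'core %d\n' "$cpu"
  fi
done
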
00:29:49.012 [2024-07-14 15:02:28.115645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.012 [2024-07-14 15:02:28.115699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.012 [2024-07-14 15:02:28.115738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.012 [2024-07-14 15:02:28.115762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.580 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.581 [2024-07-14 15:02:28.698281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.581 15:02:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.581 Malloc1 00:29:49.581 [2024-07-14 15:02:28.840290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.840 Malloc2 00:29:49.840 Malloc3 00:29:49.840 Malloc4 00:29:50.098 Malloc5 00:29:50.098 Malloc6 00:29:50.355 Malloc7 00:29:50.355 Malloc8 00:29:50.355 Malloc9 00:29:50.614 Malloc10 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1984099 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1984099 /var/tmp/bdevperf.sock 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1984099 ']' 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:50.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
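
Each iteration of the create_subsystems loop above appends a group of RPCs to rpcs.txt (the repeated '# cat' lines), and the bare rpc_cmd call at shutdown.sh@35 then replays the batch against the running target, which is what produces the Malloc1..Malloc10 bdevs and the TCP listener on 10.0.0.2 port 4420. A hedged sketch of what such a batch can look like follows; the malloc geometry and serial numbers are illustrative, and only the RPC method names, the bdev names and the 10.0.0.2:4420 listener are taken from the log output.

# Illustrative per-subsystem RPC batch, written to rpcs.txt and replayed later.
for i in $(seq 1 10); do
  cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
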
00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.614 "hdgst": ${hdgst:-false}, 00:29:50.614 "ddgst": ${ddgst:-false} 00:29:50.614 }, 00:29:50.614 "method": "bdev_nvme_attach_controller" 00:29:50.614 } 00:29:50.614 EOF 00:29:50.614 )") 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.614 "hdgst": ${hdgst:-false}, 00:29:50.614 "ddgst": ${ddgst:-false} 00:29:50.614 }, 00:29:50.614 "method": "bdev_nvme_attach_controller" 00:29:50.614 } 00:29:50.614 EOF 00:29:50.614 )") 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.614 "hdgst": ${hdgst:-false}, 00:29:50.614 "ddgst": ${ddgst:-false} 00:29:50.614 }, 00:29:50.614 "method": "bdev_nvme_attach_controller" 00:29:50.614 } 00:29:50.614 EOF 00:29:50.614 )") 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 
00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.614 "hdgst": ${hdgst:-false}, 00:29:50.614 "ddgst": ${ddgst:-false} 00:29:50.614 }, 00:29:50.614 "method": "bdev_nvme_attach_controller" 00:29:50.614 } 00:29:50.614 EOF 00:29:50.614 )") 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.614 "hdgst": ${hdgst:-false}, 00:29:50.614 "ddgst": ${ddgst:-false} 00:29:50.614 }, 00:29:50.614 "method": "bdev_nvme_attach_controller" 00:29:50.614 } 00:29:50.614 EOF 00:29:50.614 )") 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.614 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.614 { 00:29:50.614 "params": { 00:29:50.614 "name": "Nvme$subsystem", 00:29:50.614 "trtype": "$TEST_TRANSPORT", 00:29:50.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.614 "adrfam": "ipv4", 00:29:50.614 "trsvcid": "$NVMF_PORT", 00:29:50.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.615 "hdgst": ${hdgst:-false}, 00:29:50.615 "ddgst": ${ddgst:-false} 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 } 00:29:50.615 EOF 00:29:50.615 )") 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.615 { 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme$subsystem", 00:29:50.615 "trtype": "$TEST_TRANSPORT", 00:29:50.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "$NVMF_PORT", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.615 "hdgst": ${hdgst:-false}, 00:29:50.615 "ddgst": ${ddgst:-false} 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 } 00:29:50.615 EOF 00:29:50.615 )") 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.615 { 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme$subsystem", 00:29:50.615 "trtype": "$TEST_TRANSPORT", 00:29:50.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "$NVMF_PORT", 00:29:50.615 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.615 "hdgst": ${hdgst:-false}, 00:29:50.615 "ddgst": ${ddgst:-false} 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 } 00:29:50.615 EOF 00:29:50.615 )") 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.615 { 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme$subsystem", 00:29:50.615 "trtype": "$TEST_TRANSPORT", 00:29:50.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "$NVMF_PORT", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.615 "hdgst": ${hdgst:-false}, 00:29:50.615 "ddgst": ${ddgst:-false} 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 } 00:29:50.615 EOF 00:29:50.615 )") 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.615 { 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme$subsystem", 00:29:50.615 "trtype": "$TEST_TRANSPORT", 00:29:50.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "$NVMF_PORT", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.615 "hdgst": ${hdgst:-false}, 00:29:50.615 "ddgst": ${ddgst:-false} 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 } 00:29:50.615 EOF 00:29:50.615 )") 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:50.615 15:02:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme1", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme2", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme3", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme4", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme5", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme6", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme7", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme8", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:50.615 "hdgst": false, 
00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme9", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.615 },{ 00:29:50.615 "params": { 00:29:50.615 "name": "Nvme10", 00:29:50.615 "trtype": "tcp", 00:29:50.615 "traddr": "10.0.0.2", 00:29:50.615 "adrfam": "ipv4", 00:29:50.615 "trsvcid": "4420", 00:29:50.615 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:50.615 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:50.615 "hdgst": false, 00:29:50.615 "ddgst": false 00:29:50.615 }, 00:29:50.615 "method": "bdev_nvme_attach_controller" 00:29:50.616 }' 00:29:50.616 [2024-07-14 15:02:29.852871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:50.616 [2024-07-14 15:02:29.853027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984099 ] 00:29:50.873 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.873 [2024-07-14 15:02:29.983838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.133 [2024-07-14 15:02:30.221491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.031 Running I/O for 10 seconds... 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:53.289 15:02:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.289 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.546 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.546 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:53.546 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:53.546 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1984099 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1984099 ']' 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1984099 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1984099 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1984099' 00:29:53.805 killing process with pid 1984099 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1984099 00:29:53.805 15:02:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1984099 00:29:53.805 Received shutdown signal, test time was about 0.942785 seconds 00:29:53.805 00:29:53.806 Latency(us) 00:29:53.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:29:53.806 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme1n1 : 0.91 216.21 13.51 0.00 0.00 290570.35 4369.07 281173.71 00:29:53.806 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme2n1 : 0.92 212.99 13.31 0.00 0.00 288188.00 7864.32 293601.28 00:29:53.806 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme3n1 : 0.90 213.81 13.36 0.00 0.00 281726.93 22524.97 293601.28 00:29:53.806 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme4n1 : 0.88 217.44 13.59 0.00 0.00 270200.73 20097.71 301368.51 00:29:53.806 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme5n1 : 0.86 148.16 9.26 0.00 0.00 386184.53 24660.95 320009.86 00:29:53.806 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme6n1 : 0.93 205.84 12.86 0.00 0.00 273615.08 24660.95 304475.40 00:29:53.806 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme7n1 : 0.92 208.32 13.02 0.00 0.00 263354.60 25243.50 309135.74 00:29:53.806 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme8n1 : 0.90 212.95 13.31 0.00 0.00 250269.65 27379.48 312242.63 00:29:53.806 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme9n1 : 0.88 145.63 9.10 0.00 0.00 353945.22 26020.22 351078.78 00:29:53.806 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.806 Verification LBA range: start 0x0 length 0x400 00:29:53.806 Nvme10n1 : 0.94 200.65 12.54 0.00 0.00 253343.19 31068.92 293601.28 00:29:53.806 =================================================================================================================== 00:29:53.806 Total : 1981.99 123.87 0.00 0.00 285577.50 4369.07 351078.78 00:29:55.181 15:02:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.119 15:02:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.119 rmmod nvme_tcp 00:29:56.119 rmmod nvme_fabrics 00:29:56.119 rmmod nvme_keyring 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1983787 ']' 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1983787 ']' 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1983787' 00:29:56.119 killing process with pid 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1983787 00:29:56.119 15:02:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1983787 00:29:59.404 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.405 15:02:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:01.306 00:30:01.306 real 0m12.707s 00:30:01.306 user 0m42.426s 00:30:01.306 sys 0m1.917s 
00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.306 ************************************ 00:30:01.306 END TEST nvmf_shutdown_tc2 00:30:01.306 ************************************ 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.306 ************************************ 00:30:01.306 START TEST nvmf_shutdown_tc3 00:30:01.306 ************************************ 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:01.306 15:02:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:30:01.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:01.306 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:01.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:01.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:30:01.307 00:30:01.307 --- 10.0.0.2 ping statistics --- 00:30:01.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.307 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:01.307 00:30:01.307 --- 10.0.0.1 ping statistics --- 00:30:01.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.307 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1985409 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1985409 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1985409 ']' 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.307 15:02:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:01.307 [2024-07-14 15:02:40.495941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:01.307 [2024-07-14 15:02:40.496074] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.307 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.565 [2024-07-14 15:02:40.638231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.824 [2024-07-14 15:02:40.900806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.824 [2024-07-14 15:02:40.900886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.824 [2024-07-14 15:02:40.900917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.824 [2024-07-14 15:02:40.900940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.824 [2024-07-14 15:02:40.900963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.824 [2024-07-14 15:02:40.901103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.824 [2024-07-14 15:02:40.901194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.824 [2024-07-14 15:02:40.901229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.824 [2024-07-14 15:02:40.901238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.391 [2024-07-14 15:02:41.486310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.391 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.392 15:02:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.392 Malloc1 00:30:02.392 [2024-07-14 15:02:41.613555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.392 Malloc2 00:30:02.650 Malloc3 00:30:02.651 Malloc4 00:30:02.909 Malloc5 00:30:02.909 Malloc6 00:30:02.909 Malloc7 00:30:03.168 Malloc8 00:30:03.168 Malloc9 00:30:03.428 Malloc10 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.428 
15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1985717 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1985717 /var/tmp/bdevperf.sock 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1985717 ']' 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.428 { 00:30:03.428 "params": { 00:30:03.428 "name": "Nvme$subsystem", 00:30:03.428 "trtype": "$TEST_TRANSPORT", 00:30:03.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.428 "adrfam": "ipv4", 00:30:03.428 "trsvcid": "$NVMF_PORT", 00:30:03.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.428 "hdgst": ${hdgst:-false}, 00:30:03.428 "ddgst": ${ddgst:-false} 00:30:03.428 }, 00:30:03.428 "method": "bdev_nvme_attach_controller" 00:30:03.428 } 00:30:03.428 EOF 00:30:03.428 )") 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.428 { 00:30:03.428 "params": { 00:30:03.428 "name": "Nvme$subsystem", 00:30:03.428 "trtype": "$TEST_TRANSPORT", 00:30:03.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.428 "adrfam": "ipv4", 00:30:03.428 "trsvcid": "$NVMF_PORT", 00:30:03.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.428 "hdgst": ${hdgst:-false}, 00:30:03.428 "ddgst": ${ddgst:-false} 00:30:03.428 }, 00:30:03.428 "method": "bdev_nvme_attach_controller" 00:30:03.428 } 00:30:03.428 EOF 00:30:03.428 )") 00:30:03.428 15:02:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.428 { 00:30:03.428 "params": { 00:30:03.428 "name": "Nvme$subsystem", 00:30:03.428 "trtype": "$TEST_TRANSPORT", 00:30:03.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.428 "adrfam": "ipv4", 00:30:03.428 "trsvcid": "$NVMF_PORT", 00:30:03.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.428 "hdgst": ${hdgst:-false}, 00:30:03.428 "ddgst": ${ddgst:-false} 00:30:03.428 }, 00:30:03.428 "method": "bdev_nvme_attach_controller" 00:30:03.428 } 00:30:03.428 EOF 00:30:03.428 )") 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.428 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.428 { 00:30:03.428 "params": { 00:30:03.428 "name": "Nvme$subsystem", 00:30:03.428 "trtype": "$TEST_TRANSPORT", 00:30:03.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.428 "adrfam": "ipv4", 00:30:03.428 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.429 { 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme$subsystem", 00:30:03.429 "trtype": "$TEST_TRANSPORT", 00:30:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "$NVMF_PORT", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.429 "hdgst": ${hdgst:-false}, 00:30:03.429 "ddgst": ${ddgst:-false} 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 } 00:30:03.429 EOF 00:30:03.429 )") 00:30:03.429 15:02:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:03.429 15:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme1", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme2", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme3", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme4", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme5", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme6", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme7", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme8", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 
00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.429 },{ 00:30:03.429 "params": { 00:30:03.429 "name": "Nvme9", 00:30:03.429 "trtype": "tcp", 00:30:03.429 "traddr": "10.0.0.2", 00:30:03.429 "adrfam": "ipv4", 00:30:03.429 "trsvcid": "4420", 00:30:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:03.429 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:03.429 "hdgst": false, 00:30:03.429 "ddgst": false 00:30:03.429 }, 00:30:03.429 "method": "bdev_nvme_attach_controller" 00:30:03.430 },{ 00:30:03.430 "params": { 00:30:03.430 "name": "Nvme10", 00:30:03.430 "trtype": "tcp", 00:30:03.430 "traddr": "10.0.0.2", 00:30:03.430 "adrfam": "ipv4", 00:30:03.430 "trsvcid": "4420", 00:30:03.430 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:03.430 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:03.430 "hdgst": false, 00:30:03.430 "ddgst": false 00:30:03.430 }, 00:30:03.430 "method": "bdev_nvme_attach_controller" 00:30:03.430 }' 00:30:03.430 [2024-07-14 15:02:42.621312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:03.430 [2024-07-14 15:02:42.621472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985717 ] 00:30:03.430 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.688 [2024-07-14 15:02:42.750758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.688 [2024-07-14 15:02:42.989765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.635 Running I/O for 10 seconds... 
00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=16 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 16 -ge 100 ']' 00:30:06.199 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=67 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:06.456 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1985409 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1985409 ']' 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1985409 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1985409 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1985409' 00:30:06.727 killing process with pid 1985409 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1985409 00:30:06.727 15:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1985409 00:30:06.727 [2024-07-14 15:02:45.937012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937157] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937557] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.727 [2024-07-14 15:02:45.937866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937957] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.937995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.938286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941110] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941539] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.941989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942089] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.942542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:06.728 [2024-07-14 15:02:45.943011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.728 [2024-07-14 15:02:45.943066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.728 [2024-07-14 15:02:45.943096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.728 [2024-07-14 
15:02:45.943121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.728 [2024-07-14 15:02:45.943143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.943176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.943200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.943222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.943242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.946983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 
00:30:06.729 [2024-07-14 15:02:45.947458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.947968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 
00:30:06.729 [2024-07-14 15:02:45.948078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 
00:30:06.729 [2024-07-14 15:02:45.948661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.948751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.951161] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.729 [2024-07-14 15:02:45.952748] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.729 [2024-07-14 15:02:45.953653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.953809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.953833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.953886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.953852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.953920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.953941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.729 [2024-07-14 15:02:45.953982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.729 [2024-07-14 15:02:45.953984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.729 [2024-07-14 15:02:45.954000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954273]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.730 [2024-07-14 15:02:45.954560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.730 [2024-07-14 15:02:45.954617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.730 [2024-07-14 15:02:45.954638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:06.730 [2024-07-14 15:02:45.954717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.954992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to 
be set 00:30:06.730 [2024-07-14 15:02:45.955035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.955394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.956921] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.730 [2024-07-14 15:02:45.959266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959403] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.730 [2024-07-14 15:02:45.959577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959922] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959941] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.731 [2024-07-14 15:02:45.959961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.959980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.960753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:06.731 [2024-07-14 15:02:45.963792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.963834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.963898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.963936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.963969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.963993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.731 [2024-07-14 15:02:45.964385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.731 [2024-07-14 15:02:45.964410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964429] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:06.732 [2024-07-14 15:02:45.964459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.964966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.964989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.965968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.965990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.732 [2024-07-14 15:02:45.966486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.732 [2024-07-14 15:02:45.966511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.966973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.966998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.967022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.967047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.967070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.967097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set 00:30:06.733 [2024-07-14 15:02:45.967410] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9080 was disconnected and freed. reset controller. 00:30:06.733 [2024-07-14 15:02:45.968135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:06.733 [2024-07-14 15:02:45.968260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:06.733 [2024-07-14 15:02:45.968538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.733 [2024-07-14 15:02:45.968702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.968722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:06.733 [2024-07-14 15:02:45.968755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:06.733 [2024-07-14 15:02:45.968801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:06.733 [2024-07-14 15:02:45.968975] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.733 [2024-07-14 15:02:45.972331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:06.733 [2024-07-14 15:02:45.972401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:06.733 [2024-07-14 15:02:45.972498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.972959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.972981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.733 [2024-07-14 15:02:45.973471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.733 [2024-07-14 15:02:45.973494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:06.734 [2024-07-14 15:02:45.973795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.973970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.973994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 
15:02:45.974318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.974915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:12[2024-07-14 15:02:45.974932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.974971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.974975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.974997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-14 15:02:45.974996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:12with the state(5) to be set 00:30:06.734 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.975027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 15:02:45.975026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.975068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.975086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:12[2024-07-14 15:02:45.975104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b880 is same [2024-07-14 15:02:45.975125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:06.734 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.975144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.975173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.975208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.734 [2024-07-14 15:02:45.975227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.734 [2024-07-14 15:02:45.975246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.734 [2024-07-14 15:02:45.975265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 15:02:45.975518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.735 [2024-07-14 15:02:45.975688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.735 [2024-07-14 15:02:45.975707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.975985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.977654] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9300 was disconnected and freed. reset controller. 00:30:06.735 [2024-07-14 15:02:45.977954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:06.735 [2024-07-14 15:02:45.978148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.735 [2024-07-14 15:02:45.978295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.978988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:06.736 [2024-07-14 15:02:45.979401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:06.736 [2024-07-14 15:02:45.979469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.736 [2024-07-14 15:02:45.979657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:06.736 [2024-07-14 15:02:45.979681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.736 [2024-07-14 15:02:45.979835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:30:06.736 [2024-07-14 15:02:45.979869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.979961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.979990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980062] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.980201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:06.736 [2024-07-14 15:02:45.980275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.736 [2024-07-14 15:02:45.980430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.736 [2024-07-14 15:02:45.980449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.980953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.980994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.736 [2024-07-14 15:02:45.981019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the 
state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:06.737 [2024-07-14 15:02:45.981459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file 
descriptor 00:30:06.737 [2024-07-14 15:02:45.981496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737
[2024-07-14 15:02:45.981576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737
[2024-07-14 15:02:45.981664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737
[2024-07-14 15:02:45.981760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737
[2024-07-14 15:02:45.981926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.981974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.981981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.981994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.982013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737
[2024-07-14 15:02:45.982032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.982052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.982070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.982089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737 [2024-07-14 15:02:45.982107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.737 [2024-07-14 15:02:45.982144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.737 [2024-07-14 15:02:45.982150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.737
[2024-07-14 15:02:45.982162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.738 [2024-07-14 15:02:45.982176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.738 [2024-07-14 15:02:45.982214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.738 [2024-07-14 15:02:45.982213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.738 [2024-07-14 15:02:45.982242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:06.738 [2024-07-14 15:02:45.982310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 
[2024-07-14 15:02:45.982699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.982959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.982981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 
15:02:45.983223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.983956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.983980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.984002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.984027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.984053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.984078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.738 [2024-07-14 15:02:45.984101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.738 [2024-07-14 15:02:45.984125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.984728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.984749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8900 is same with the state(5) to be set 00:30:06.739 [2024-07-14 15:02:45.986342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.986973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.986995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.739 [2024-07-14 15:02:45.987552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.739 [2024-07-14 15:02:45.987573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.987958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.987981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.988960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.988983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.740 [2024-07-14 15:02:45.989393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.740 [2024-07-14 15:02:45.989418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.989441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.989466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.989489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.989514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.989537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.989560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:30:06.741 [2024-07-14 15:02:45.991420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991592] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.991964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.991989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.992921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.992994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.993022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.993045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.993070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.993092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.993122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.993146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.741 [2024-07-14 15:02:45.993171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.741 [2024-07-14 15:02:45.993194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:06.742 [2024-07-14 15:02:45.993626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.993959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.993981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 
15:02:45.994123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.742 [2024-07-14 15:02:45.994612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:45.994636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9d00 is same with the state(5) to be set 00:30:06.742 [2024-07-14 15:02:45.999644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:06.742 [2024-07-14 15:02:45.999754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:06.742 [2024-07-14 15:02:45.999783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:06.742 [2024-07-14 15:02:46.000088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.742 [2024-07-14 15:02:46.000136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:30:06.742 [2024-07-14 15:02:46.000174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:30:06.742 [2024-07-14 15:02:46.000200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:06.742 [2024-07-14 15:02:46.000223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:06.742 [2024-07-14 15:02:46.000246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:06.742 [2024-07-14 15:02:46.000289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:06.742 [2024-07-14 15:02:46.000310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:06.742 [2024-07-14 15:02:46.000330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:06.742 [2024-07-14 15:02:46.000402] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.742 [2024-07-14 15:02:46.000445] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:06.742 [2024-07-14 15:02:46.000538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.742 [2024-07-14 15:02:46.000579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:46.000608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.742 [2024-07-14 15:02:46.000630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:46.000651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.742 [2024-07-14 15:02:46.000672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:46.000694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.742 [2024-07-14 15:02:46.000715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.742 [2024-07-14 15:02:46.000735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:30:06.742 [2024-07-14 15:02:46.000794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:06.742 [2024-07-14 15:02:46.000853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:06.742 [2024-07-14 15:02:46.001017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:06.742 [2024-07-14 15:02:46.001438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:06.742 [2024-07-14 15:02:46.001492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:06.742 [2024-07-14 15:02:46.001626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.743 [2024-07-14 15:02:46.001672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:30:06.743 [2024-07-14 15:02:46.001696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:06.743 [2024-07-14 15:02:46.001811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.743 [2024-07-14 15:02:46.001845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:30:06.743 [2024-07-14 15:02:46.001893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:06.743 [2024-07-14 15:02:46.001996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.743 [2024-07-14 15:02:46.002030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:30:06.743 [2024-07-14 15:02:46.002053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:30:06.743 [2024-07-14 15:02:46.003148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.003972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.003997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.004970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.004992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.005016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.005038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.005063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.005086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.005110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.743 [2024-07-14 15:02:46.005133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.743 [2024-07-14 15:02:46.005158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.005972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.005994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.006398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.006418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:30:06.744 [2024-07-14 15:02:46.008009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.744 [2024-07-14 15:02:46.008542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.744 [2024-07-14 15:02:46.008563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008625] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.008949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.008976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.009998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:06.745 [2024-07-14 15:02:46.010684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.745 [2024-07-14 15:02:46.010730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.745 [2024-07-14 15:02:46.010752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.010776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.010798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.010821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.010843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.010912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.010938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.010963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.010987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 
15:02:46.011205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.011294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.011317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:30:06.746 [2024-07-14 15:02:46.011594] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9580 was disconnected and freed. reset controller. 00:30:06.746 [2024-07-14 15:02:46.012320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:06.746 [2024-07-14 15:02:46.012399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:06.746 [2024-07-14 15:02:46.012434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:06.746 [2024-07-14 15:02:46.012462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:06.746 [2024-07-14 15:02:46.012486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:06.746 [2024-07-14 15:02:46.012506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:06.746 [2024-07-14 15:02:46.012525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:06.746 [2024-07-14 15:02:46.012598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:30:06.746 [2024-07-14 15:02:46.012654] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.746 [2024-07-14 15:02:46.012686] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.746 [2024-07-14 15:02:46.012713] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.746 [2024-07-14 15:02:46.012738] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.746 [2024-07-14 15:02:46.012763] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:06.746 [2024-07-14 15:02:46.014152] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.746 [2024-07-14 15:02:46.014314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:06.746 [2024-07-14 15:02:46.014367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:06.746 [2024-07-14 15:02:46.014557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.746 [2024-07-14 15:02:46.014594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:30:06.746 [2024-07-14 15:02:46.014619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:06.746 [2024-07-14 15:02:46.014642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:06.746 [2024-07-14 15:02:46.014673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:06.746 [2024-07-14 15:02:46.014693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:06.746 [2024-07-14 15:02:46.014724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:06.746 [2024-07-14 15:02:46.014745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:06.746 [2024-07-14 15:02:46.014763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:06.746 [2024-07-14 15:02:46.014790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:06.746 [2024-07-14 15:02:46.014815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:06.746 [2024-07-14 15:02:46.014835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:30:06.746 [2024-07-14 15:02:46.015431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.015927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 
15:02:46.015975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.015997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.746 [2024-07-14 15:02:46.016400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.746 [2024-07-14 15:02:46.016423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016469] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.016970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.016992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.017961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.017985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.747 [2024-07-14 15:02:46.018533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.747 [2024-07-14 15:02:46.018558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.748 [2024-07-14 15:02:46.018580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.748 [2024-07-14 15:02:46.018604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.748 [2024-07-14 15:02:46.018651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.748 [2024-07-14 15:02:46.018675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:30:06.748 [2024-07-14 15:02:46.020333] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:06.748 [2024-07-14 15:02:46.020404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:06.748 [2024-07-14 15:02:46.020443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:06.748 [2024-07-14 15:02:46.020468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:06.748 [2024-07-14 15:02:46.020486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:06.748 [2024-07-14 15:02:46.020503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:06.748 [2024-07-14 15:02:46.020523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:06.748 [2024-07-14 15:02:46.020692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.748 [2024-07-14 15:02:46.020728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:30:06.748 [2024-07-14 15:02:46.020751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:30:06.748 [2024-07-14 15:02:46.020779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:06.748 [2024-07-14 15:02:46.021517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.748 [2024-07-14 15:02:46.021565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:30:06.748 [2024-07-14 15:02:46.021588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:06.748 [2024-07-14 15:02:46.021693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.748 [2024-07-14 15:02:46.021727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:06.748 [2024-07-14 15:02:46.021750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:06.748 [2024-07-14 15:02:46.021901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.748 [2024-07-14 15:02:46.021935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:30:06.748 [2024-07-14 15:02:46.021957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:06.748 [2024-07-14 15:02:46.021985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:06.748 [2024-07-14 15:02:46.022010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:06.748 [2024-07-14 15:02:46.022029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:06.748 [2024-07-14 15:02:46.022053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:07.010 [2024-07-14 15:02:46.022652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:07.010 [2024-07-14 15:02:46.022689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:07.010 [2024-07-14 15:02:46.022719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:07.010 [2024-07-14 15:02:46.022747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:07.010 [2024-07-14 15:02:46.022770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:07.010 [2024-07-14 15:02:46.022790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:07.010 [2024-07-14 15:02:46.022809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:07.010 [2024-07-14 15:02:46.022983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.010 [2024-07-14 15:02:46.023055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:07.010 [2024-07-14 15:02:46.023081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:07.010 [2024-07-14 15:02:46.023101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:07.010 [2024-07-14 15:02:46.023127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:07.010 [2024-07-14 15:02:46.023147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:07.010 [2024-07-14 15:02:46.023175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:07.010 [2024-07-14 15:02:46.023217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:07.010 [2024-07-14 15:02:46.023247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:07.010 [2024-07-14 15:02:46.023265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:30:07.010 [2024-07-14 15:02:46.023358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 
15:02:46.023850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.023959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.023981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.010 [2024-07-14 15:02:46.024836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.010 [2024-07-14 15:02:46.024870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.024903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.024926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.024950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.024973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.024997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.025998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.011 [2024-07-14 15:02:46.026482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.011 [2024-07-14 15:02:46.026504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:30:07.011 [2024-07-14 15:02:46.031090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:07.011 [2024-07-14 15:02:46.031130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:07.011 [2024-07-14 15:02:46.031156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:07.011 [2024-07-14 15:02:46.031204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:07.011 [2024-07-14 15:02:46.031232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:07.011 [2024-07-14 15:02:46.031255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.011 [2024-07-14 15:02:46.031272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.011 [2024-07-14 15:02:46.031289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
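Every command in the dump above completes with status "(00/08)": status code type 0x0 (generic command status) and status code 0x08, i.e. the command was aborted because its submission queue was deleted during shutdown. A minimal shell sketch for tallying how many queued READs and WRITEs were dropped this way, assuming the console output has been saved to a file named build.log (a hypothetical name):

# Count the aborted READ/WRITE commands printed in the dump above.
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log \
  | awk '{n[$NF]++} END {for (op in n) printf "%s commands aborted: %d\n", op, n[op]}'
# Each print_command notice in this dump is paired with an "ABORTED - SQ DELETION"
# completion, so counting the notices counts the aborted commands.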
00:30:07.011 task offset: 31616 on job bdev=Nvme5n1 fails
00:30:07.011
00:30:07.011 Latency(us)
00:30:07.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:07.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.011 Job: Nvme1n1 ended in about 1.17 seconds with error
00:30:07.011 Verification LBA range: start 0x0 length 0x400
00:30:07.011 Nvme1n1 : 1.17 109.05 6.82 54.52 0.00 387575.66 40195.41 299815.06
00:30:07.011 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.011 Job: Nvme2n1 ended in about 1.18 seconds with error
00:30:07.011 Verification LBA range: start 0x0 length 0x400
00:30:07.011 Nvme2n1 : 1.18 162.33 10.15 54.11 0.00 287968.14 21262.79 298261.62
00:30:07.011 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.011 Job: Nvme3n1 ended in about 1.19 seconds with error
00:30:07.011 Verification LBA range: start 0x0 length 0x400
00:30:07.011 Nvme3n1 : 1.19 165.04 10.32 53.89 0.00 279739.95 21456.97 299815.06
00:30:07.011 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.011 Job: Nvme4n1 ended in about 1.20 seconds with error
00:30:07.011 Verification LBA range: start 0x0 length 0x400
00:30:07.011 Nvme4n1 : 1.20 159.42 9.96 53.14 0.00 283274.81 23981.32 304475.40
00:30:07.011 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Job: Nvme5n1 ended in about 1.17 seconds with error
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme5n1 : 1.17 164.52 10.28 54.84 0.00 269073.26 6553.60 299815.06
00:30:07.012 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme6n1 : 1.17 164.26 10.27 0.00 0.00 352642.09 24563.86 312242.63
00:30:07.012 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Job: Nvme7n1 ended in about 1.21 seconds with error
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme7n1 : 1.21 158.57 9.91 52.86 0.00 270185.43 24369.68 306028.85
00:30:07.012 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Job: Nvme8n1 ended in about 1.22 seconds with error
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme8n1 : 1.22 161.10 10.07 52.60 0.00 262567.65 22233.69 279620.27
00:30:07.012 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Job: Nvme9n1 ended in about 1.22 seconds with error
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme9n1 : 1.22 161.71 10.11 52.27 0.00 257610.95 22330.79 299815.06
00:30:07.012 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:07.012 Job: Nvme10n1 ended in about 1.19 seconds with error
00:30:07.012 Verification LBA range: start 0x0 length 0x400
00:30:07.012 Nvme10n1 : 1.19 107.32 6.71 53.66 0.00 334510.21 27185.30 341758.10
00:30:07.012 ===================================================================================================================
00:30:07.012 Total : 1513.31 94.58 481.89 0.00 293505.56 6553.60 341758.10
00:30:07.012 [2024-07-14 15:02:46.113430] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:07.012 [2024-07-14 15:02:46.113536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:07.012 [2024-07-14 15:02:46.113920]
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.113977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.114016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.114131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.114174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.114207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.114322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.114355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.114378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.114518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.114550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.114573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.114706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.114738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.114761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.115428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:07.012 [2024-07-14 15:02:46.115465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:07.012 [2024-07-14 15:02:46.115499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:07.012 [2024-07-14 15:02:46.115524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:07.012 [2024-07-14 15:02:46.115781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.115817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.115841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.115888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.115925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.115953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.115982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.116010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.116086] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:07.012 [2024-07-14 15:02:46.116118] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:07.012 [2024-07-14 15:02:46.116147] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:07.012 [2024-07-14 15:02:46.116199] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:07.012 [2024-07-14 15:02:46.116225] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:07.012 [2024-07-14 15:02:46.116462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.116498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.116526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.116653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.116688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.116710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.116871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.116910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.116933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.117063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.012 [2024-07-14 15:02:46.117097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:30:07.012 [2024-07-14 15:02:46.117120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:07.012 [2024-07-14 15:02:46.117147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.117181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117201] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.012 [2024-07-14 15:02:46.117625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.012 [2024-07-14 15:02:46.117642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.012 [2024-07-14 15:02:46.117658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.012 [2024-07-14 15:02:46.117678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
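As a quick sanity check on the bdevperf summary a few lines above: with 65536-byte IOs the MiB/s column should simply be IOPS divided by 16 (65536 B per IO, 1048576 B per MiB). A one-line sketch using the Nvme2n1 row (values copied from the table, nothing else assumed):

awk 'BEGIN { iops = 162.33; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / 1048576 }'
# prints 10.15, matching the MiB/s reported for Nvme2n1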
00:30:07.012 [2024-07-14 15:02:46.117703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.117732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.117774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.117802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:07.012 [2024-07-14 15:02:46.117826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.117844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.117873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:07.012 [2024-07-14 15:02:46.117970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.012 [2024-07-14 15:02:46.117999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.118018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.118037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:07.012 [2024-07-14 15:02:46.118064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:07.012 [2024-07-14 15:02:46.118085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:07.012 [2024-07-14 15:02:46.118104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:07.013 [2024-07-14 15:02:46.118129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:07.013 [2024-07-14 15:02:46.118150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:07.013 [2024-07-14 15:02:46.118168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:07.013 [2024-07-14 15:02:46.118218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:07.013 [2024-07-14 15:02:46.118238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:07.013 [2024-07-14 15:02:46.118267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:07.013 [2024-07-14 15:02:46.118328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.013 [2024-07-14 15:02:46.118353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.013 [2024-07-14 15:02:46.118371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:07.013 [2024-07-14 15:02:46.118387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.301 15:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:10.301 15:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1985717 00:30:10.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1985717) - No such process 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.885 rmmod nvme_tcp 00:30:10.885 rmmod nvme_fabrics 00:30:10.885 rmmod nvme_keyring 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.885 15:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:30:12.791 00:30:12.791 real 0m11.814s 00:30:12.791 user 0m34.469s 00:30:12.791 sys 0m2.073s 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:12.791 ************************************ 00:30:12.791 END TEST nvmf_shutdown_tc3 00:30:12.791 ************************************ 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:12.791 00:30:12.791 real 0m42.439s 00:30:12.791 user 2m14.803s 00:30:12.791 sys 0m7.873s 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.791 15:02:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:12.791 ************************************ 00:30:12.791 END TEST nvmf_shutdown 00:30:12.791 ************************************ 00:30:12.791 15:02:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:12.791 15:02:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:12.791 15:02:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.791 15:02:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.791 15:02:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:12.791 15:02:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.791 15:02:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.050 15:02:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:13.050 15:02:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:13.050 15:02:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:13.050 15:02:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.050 15:02:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.050 ************************************ 00:30:13.050 START TEST nvmf_multicontroller 00:30:13.050 ************************************ 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:13.050 * Looking for test storage... 
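For reference, the nvmftestfini teardown traced at the end of the shutdown test above amounts to roughly the following sketch (module, interface, and namespace names are the ones this run used; the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to):

# Unload the kernel initiator modules, drop the target namespace, flush the initiator IP.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1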
00:30:13.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.050 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:13.051 15:02:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.051 15:02:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.955 15:02:54 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.955 15:02:54 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.955 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:15.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:30:15.214 00:30:15.214 --- 10.0.0.2 ping statistics --- 00:30:15.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.214 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:30:15.214 00:30:15.214 --- 10.0.0.1 ping statistics --- 00:30:15.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.214 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:15.214 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1988511 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1988511 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1988511 ']' 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- 
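[editor's note] The trace up to this point builds the test topology: one E810 port (cvl_0_0) is moved into a network namespace and used as the NVMe/TCP target, the other port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction verifies the link. A condensed sketch of that setup, using only the commands and addresses that appear in the trace above (interface names are specific to this machine):

  # target port in its own netns, initiator port in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator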
common/autotest_common.sh@834 -- # local max_retries=100 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:15.215 15:02:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.215 [2024-07-14 15:02:54.391295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:15.215 [2024-07-14 15:02:54.391434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.215 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.474 [2024-07-14 15:02:54.536500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:15.733 [2024-07-14 15:02:54.795454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.733 [2024-07-14 15:02:54.795537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.733 [2024-07-14 15:02:54.795572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.733 [2024-07-14 15:02:54.795609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.733 [2024-07-14 15:02:54.795630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:15.733 [2024-07-14 15:02:54.795778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.733 [2024-07-14 15:02:54.795808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.733 [2024-07-14 15:02:54.795798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 [2024-07-14 15:02:55.371598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 Malloc0 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 [2024-07-14 15:02:55.489079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 
15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 [2024-07-14 15:02:55.496923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 Malloc1 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.347 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1988725 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller 
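[editor's note] The target-side configuration traced above reduces to a short JSON-RPC sequence; rpc_cmd in these scripts forwards to scripts/rpc.py against the nvmf_tgt RPC socket. A rough equivalent for the cnode1 half (arguments copied from the trace; cnode2/Malloc1 and the 4421 listeners follow the same pattern, and the rpc.py path is assumed to be the checked-out spdk tree):

  RPC=./scripts/rpc.py   # defaults to /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421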
-- host/multicontroller.sh@47 -- # waitforlisten 1988725 /var/tmp/bdevperf.sock 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1988725 ']' 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:16.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.348 15:02:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.722 NVMe0n1 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.722 1 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.722 request: 00:30:17.722 { 00:30:17.722 "name": "NVMe0", 00:30:17.722 "trtype": "tcp", 00:30:17.722 "traddr": "10.0.0.2", 00:30:17.722 "adrfam": "ipv4", 00:30:17.722 "trsvcid": "4420", 00:30:17.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.722 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:17.722 "hostaddr": "10.0.0.2", 00:30:17.722 "hostsvcid": "60000", 00:30:17.722 "prchk_reftag": false, 00:30:17.722 "prchk_guard": false, 00:30:17.722 "hdgst": false, 00:30:17.722 "ddgst": false, 00:30:17.722 "method": "bdev_nvme_attach_controller", 00:30:17.722 "req_id": 1 00:30:17.722 } 00:30:17.722 Got JSON-RPC error response 00:30:17.722 response: 00:30:17.722 { 00:30:17.722 "code": -114, 00:30:17.722 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:17.722 } 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.722 request: 00:30:17.722 { 00:30:17.722 "name": "NVMe0", 00:30:17.722 "trtype": "tcp", 00:30:17.722 "traddr": "10.0.0.2", 00:30:17.722 "adrfam": "ipv4", 00:30:17.722 "trsvcid": "4420", 00:30:17.722 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:17.722 "hostaddr": "10.0.0.2", 00:30:17.722 "hostsvcid": "60000", 00:30:17.722 "prchk_reftag": false, 00:30:17.722 "prchk_guard": false, 00:30:17.722 
"hdgst": false, 00:30:17.722 "ddgst": false, 00:30:17.722 "method": "bdev_nvme_attach_controller", 00:30:17.722 "req_id": 1 00:30:17.722 } 00:30:17.722 Got JSON-RPC error response 00:30:17.722 response: 00:30:17.722 { 00:30:17.722 "code": -114, 00:30:17.722 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:17.722 } 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:17.722 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 request: 00:30:17.723 { 00:30:17.723 "name": "NVMe0", 00:30:17.723 "trtype": "tcp", 00:30:17.723 "traddr": "10.0.0.2", 00:30:17.723 "adrfam": "ipv4", 00:30:17.723 "trsvcid": "4420", 00:30:17.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.723 "hostaddr": "10.0.0.2", 00:30:17.723 "hostsvcid": "60000", 00:30:17.723 "prchk_reftag": false, 00:30:17.723 "prchk_guard": false, 00:30:17.723 "hdgst": false, 00:30:17.723 "ddgst": false, 00:30:17.723 "multipath": "disable", 00:30:17.723 "method": "bdev_nvme_attach_controller", 00:30:17.723 "req_id": 1 00:30:17.723 } 00:30:17.723 Got JSON-RPC error response 00:30:17.723 response: 00:30:17.723 { 00:30:17.723 "code": -114, 00:30:17.723 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:17.723 } 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.723 15:02:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 request: 00:30:17.723 { 00:30:17.723 "name": "NVMe0", 00:30:17.723 "trtype": "tcp", 00:30:17.723 "traddr": "10.0.0.2", 00:30:17.723 "adrfam": "ipv4", 00:30:17.723 "trsvcid": "4420", 00:30:17.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.723 "hostaddr": "10.0.0.2", 00:30:17.723 "hostsvcid": "60000", 00:30:17.723 "prchk_reftag": false, 00:30:17.723 "prchk_guard": false, 00:30:17.723 "hdgst": false, 00:30:17.723 "ddgst": false, 00:30:17.723 "multipath": "failover", 00:30:17.723 "method": "bdev_nvme_attach_controller", 00:30:17.723 "req_id": 1 00:30:17.723 } 00:30:17.723 Got JSON-RPC error response 00:30:17.723 response: 00:30:17.723 { 00:30:17.723 "code": -114, 00:30:17.723 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:17.723 } 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- 
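[editor's note] The four NOT cases above all re-issue bdev_nvme_attach_controller for the existing controller name NVMe0 with a conflicting request (different hostnqn, different subsystem NQN, -x disable, -x failover on the same path) and each returns JSON-RPC error -114, as shown in the responses; the attach at multicontroller.sh@79 then succeeds because it points the same controller name at the second listener on port 4421. A hedged sketch of that succeeding call issued directly against the bdevperf RPC socket, with arguments taken from the trace:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1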
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:17.723 15:02:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:19.100 0 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1988725 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1988725 ']' 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1988725 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1988725 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1988725' 00:30:19.100 killing process with pid 1988725 00:30:19.100 15:02:58 
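[editor's note] The perform_tests step above is the second half of the bdevperf pattern used throughout this test: bdevperf is started idle with -z, NVMe bdevs are attached over its RPC socket, and bdevperf.py then kicks off the configured workload. A minimal sketch, with the binary paths and workload flags copied from the earlier trace (paths are relative to the spdk tree):

  # start bdevperf idle, waiting for an RPC trigger
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # ...attach controllers via rpc.py -s /var/tmp/bdevperf.sock, then run the workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests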
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1988725 00:30:19.100 15:02:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1988725 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:30:20.040 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:30:20.040 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:20.040 [2024-07-14 15:02:55.689038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:20.040 [2024-07-14 15:02:55.689207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988725 ] 00:30:20.040 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.040 [2024-07-14 15:02:55.817291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.040 [2024-07-14 15:02:56.048836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.041 [2024-07-14 15:02:56.959998] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name abce0c90-b4a8-432d-ba37-8e9a56a5dc36 already exists 00:30:20.041 [2024-07-14 15:02:56.960060] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:abce0c90-b4a8-432d-ba37-8e9a56a5dc36 alias for bdev NVMe1n1 00:30:20.041 [2024-07-14 15:02:56.960085] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:20.041 Running I/O for 1 seconds... 
00:30:20.041 00:30:20.041 Latency(us) 00:30:20.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.041 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:20.041 NVMe0n1 : 1.01 13376.05 52.25 0.00 0.00 9537.68 2451.53 18350.08 00:30:20.041 =================================================================================================================== 00:30:20.041 Total : 13376.05 52.25 0.00 0.00 9537.68 2451.53 18350.08 00:30:20.041 Received shutdown signal, test time was about 1.000000 seconds 00:30:20.041 00:30:20.041 Latency(us) 00:30:20.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.041 =================================================================================================================== 00:30:20.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.041 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:20.041 rmmod nvme_tcp 00:30:20.041 rmmod nvme_fabrics 00:30:20.041 rmmod nvme_keyring 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1988511 ']' 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1988511 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1988511 ']' 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1988511 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1988511 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1988511' 00:30:20.041 killing process with pid 1988511 00:30:20.041 15:02:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1988511 00:30:20.041 15:02:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1988511 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.946 15:03:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.854 15:03:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:23.854 00:30:23.854 real 0m10.710s 00:30:23.854 user 0m21.608s 00:30:23.854 sys 0m2.584s 00:30:23.854 15:03:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:23.854 15:03:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.854 ************************************ 00:30:23.854 END TEST nvmf_multicontroller 00:30:23.854 ************************************ 00:30:23.854 15:03:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:23.854 15:03:02 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:23.854 15:03:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:23.854 15:03:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.854 15:03:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.854 ************************************ 00:30:23.854 START TEST nvmf_aer 00:30:23.854 ************************************ 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:23.855 * Looking for test storage... 
00:30:23.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:23.855 15:03:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:25.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:30:25.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.751 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:25.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:25.752 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.752 
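[editor's note] The device scan traced here keys off PCI vendor/device IDs (0x8086:0x159b is the E810 pair on this host) and then maps each matching PCI function to its kernel netdev through the sysfs glob used in pci_net_devs. The same lookup can be done by hand; lspci's ID filter and the sysfs layout below are standard, with the PCI address taken from the trace:

  # list E810 functions by vendor:device ID
  lspci -nn -d 8086:159b
  # map one PCI function to its net interface(s)
  ls /sys/bus/pci/devices/0000:0a:00.0/net/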
15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:25.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:30:25.752 00:30:25.752 --- 10.0.0.2 ping statistics --- 00:30:25.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.752 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:30:25.752 00:30:25.752 --- 10.0.0.1 ping statistics --- 00:30:25.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.752 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:25.752 15:03:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1991365 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1991365 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1991365 ']' 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:25.752 15:03:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.010 [2024-07-14 15:03:05.110338] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:26.010 [2024-07-14 15:03:05.110495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.011 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.011 [2024-07-14 15:03:05.272266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.269 [2024-07-14 15:03:05.535651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.269 [2024-07-14 15:03:05.535733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:26.269 [2024-07-14 15:03:05.535762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.269 [2024-07-14 15:03:05.535784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.269 [2024-07-14 15:03:05.535806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.269 [2024-07-14 15:03:05.535956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.269 [2024-07-14 15:03:05.536019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.269 [2024-07-14 15:03:05.536059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.269 [2024-07-14 15:03:05.536070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.832 [2024-07-14 15:03:06.039332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.832 Malloc0 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.832 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.089 [2024-07-14 15:03:06.141630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.089 [ 00:30:27.089 { 00:30:27.089 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:27.089 "subtype": "Discovery", 00:30:27.089 "listen_addresses": [], 00:30:27.089 "allow_any_host": true, 00:30:27.089 "hosts": [] 00:30:27.089 }, 00:30:27.089 { 00:30:27.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.089 "subtype": "NVMe", 00:30:27.089 "listen_addresses": [ 00:30:27.089 { 00:30:27.089 "trtype": "TCP", 00:30:27.089 "adrfam": "IPv4", 00:30:27.089 "traddr": "10.0.0.2", 00:30:27.089 "trsvcid": "4420" 00:30:27.089 } 00:30:27.089 ], 00:30:27.089 "allow_any_host": true, 00:30:27.089 "hosts": [], 00:30:27.089 "serial_number": "SPDK00000000000001", 00:30:27.089 "model_number": "SPDK bdev Controller", 00:30:27.089 "max_namespaces": 2, 00:30:27.089 "min_cntlid": 1, 00:30:27.089 "max_cntlid": 65519, 00:30:27.089 "namespaces": [ 00:30:27.089 { 00:30:27.089 "nsid": 1, 00:30:27.089 "bdev_name": "Malloc0", 00:30:27.089 "name": "Malloc0", 00:30:27.089 "nguid": "7530A02079A9493CA5F04473E55D23C2", 00:30:27.089 "uuid": "7530a020-79a9-493c-a5f0-4473e55d23c2" 00:30:27.089 } 00:30:27.089 ] 00:30:27.089 } 00:30:27.089 ] 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1991521 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:27.089 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:27.090 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:27.090 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.348 Malloc1 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.348 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.348 [ 00:30:27.348 { 00:30:27.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:27.348 "subtype": "Discovery", 00:30:27.348 "listen_addresses": [], 00:30:27.348 "allow_any_host": true, 00:30:27.348 "hosts": [] 00:30:27.348 }, 00:30:27.348 { 00:30:27.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.348 "subtype": "NVMe", 00:30:27.348 "listen_addresses": [ 00:30:27.348 { 00:30:27.348 "trtype": "TCP", 00:30:27.348 "adrfam": "IPv4", 00:30:27.348 "traddr": "10.0.0.2", 00:30:27.348 "trsvcid": "4420" 00:30:27.348 } 00:30:27.348 ], 00:30:27.348 "allow_any_host": true, 00:30:27.348 "hosts": [], 00:30:27.348 "serial_number": "SPDK00000000000001", 00:30:27.348 "model_number": "SPDK bdev Controller", 00:30:27.348 "max_namespaces": 2, 00:30:27.348 "min_cntlid": 1, 00:30:27.348 "max_cntlid": 65519, 00:30:27.348 "namespaces": [ 00:30:27.349 { 00:30:27.349 "nsid": 1, 00:30:27.349 "bdev_name": "Malloc0", 00:30:27.349 "name": "Malloc0", 00:30:27.349 "nguid": "7530A02079A9493CA5F04473E55D23C2", 00:30:27.349 "uuid": "7530a020-79a9-493c-a5f0-4473e55d23c2" 00:30:27.349 }, 00:30:27.349 { 00:30:27.349 "nsid": 2, 00:30:27.349 "bdev_name": "Malloc1", 00:30:27.349 "name": "Malloc1", 00:30:27.349 "nguid": "EF0309FF2FA14EE0974CB94E480C9E24", 00:30:27.349 "uuid": "ef0309ff-2fa1-4ee0-974c-b94e480c9e24" 00:30:27.349 } 00:30:27.349 ] 00:30:27.349 } 00:30:27.349 ] 00:30:27.349 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.349 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1991521 00:30:27.606 Asynchronous Event Request test 00:30:27.607 Attaching to 10.0.0.2 00:30:27.607 Attached to 10.0.0.2 00:30:27.607 Registering asynchronous event callbacks... 00:30:27.607 Starting namespace attribute notice tests for all controllers... 
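The aer.sh flow traced above reduces to: create the TCP transport, export one malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 (capped at two namespaces by -m 2), start the test/nvme/aer listener against that subsystem, and once its touch file shows up hot-add a second namespace so the target raises a namespace-attribute-changed notice (log page 4, as in the aer_cb output that follows). A hedged recap of the same steps, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and paths are relative to the SPDK checkout:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    rm -f /tmp/aer_touch_file
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # same idea as the waitforfile loop above

    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1                       # 4 KiB-block second bdev
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN
    wait $aerpid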
00:30:27.607 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:27.607 aer_cb - Changed Namespace 00:30:27.607 Cleaning up... 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.607 15:03:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.864 rmmod nvme_tcp 00:30:27.864 rmmod nvme_fabrics 00:30:27.864 rmmod nvme_keyring 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1991365 ']' 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1991365 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1991365 ']' 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1991365 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1991365 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1991365' 00:30:27.864 killing process with pid 1991365 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1991365 00:30:27.864 15:03:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1991365 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.237 15:03:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.133 15:03:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.133 00:30:31.133 real 0m7.539s 00:30:31.133 user 0m10.665s 00:30:31.133 sys 0m2.100s 00:30:31.133 15:03:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.133 15:03:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.133 ************************************ 00:30:31.133 END TEST nvmf_aer 00:30:31.133 ************************************ 00:30:31.392 15:03:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:31.392 15:03:10 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:31.392 15:03:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:31.392 15:03:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.392 15:03:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.392 ************************************ 00:30:31.392 START TEST nvmf_async_init 00:30:31.392 ************************************ 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:31.392 * Looking for test storage... 
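The teardown between the two suites, traced just above before nvmf_async_init starts, is symmetric with the setup: unload the host-side NVMe fabrics modules, kill the nvmf_tgt reactor, drop the test namespace, and flush the initiator address. A minimal equivalent, assuming $nvmfpid still holds the target PID from this run:

    modprobe -v -r nvme-tcp         # in this run this also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                 # killprocess: stop the reactor_0 process started by nvmfappstart
    ip netns delete cvl_0_0_ns_spdk # roughly what _remove_spdk_ns amounts to here (assumption)
    ip -4 addr flush cvl_0_1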
00:30:31.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.392 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ef63ef705f8942f1b06090884a9b27d1 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.393 15:03:10 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.393 15:03:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:33.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:33.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.293 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:33.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
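gather_supported_nvmf_pci_devs, traced above, is plain sysfs bookkeeping: it keeps per-family lists of supported Intel (E810/X722) and Mellanox PCI functions and, for each function it accepts, records the netdev names published under that device's sysfs node. The per-device step for the 0000:0a:00.0 / 0x159b (ice) function found in this run looks roughly like:

    pci=0000:0a:00.0
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device    # 0x8086 / 0x159b here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # one entry per bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")                   # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run

With two such functions accepted, net_devs ends up as (cvl_0_0 cvl_0_1) and the TCP init path above takes the first as the target-side interface and the second as the initiator-side one.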
00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:33.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:33.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:30:33.294 00:30:33.294 --- 10.0.0.2 ping statistics --- 00:30:33.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.294 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:30:33.294 00:30:33.294 --- 10.0.0.1 ping statistics --- 00:30:33.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.294 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:33.294 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1994100 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1994100 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1994100 ']' 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.552 15:03:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.552 [2024-07-14 15:03:12.697343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
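The nvmfappstart step traced here launches the target inside the test namespace with a single-core mask (0x1) and then blocks until the app's RPC socket answers, which is what the "Waiting for process to start up..." message is about. The real helper is waitforlisten from autotest_common.sh; a simplified stand-in with the same effect, run from the SPDK checkout root, might be:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # illustrative wait loop, not the actual waitforlisten implementation
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done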
00:30:33.553 [2024-07-14 15:03:12.697504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.553 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.553 [2024-07-14 15:03:12.838862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.810 [2024-07-14 15:03:13.094614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.810 [2024-07-14 15:03:13.094695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.810 [2024-07-14 15:03:13.094723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.810 [2024-07-14 15:03:13.094747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.810 [2024-07-14 15:03:13.094768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.810 [2024-07-14 15:03:13.094815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.385 [2024-07-14 15:03:13.629804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.385 null0 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.385 15:03:13 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ef63ef705f8942f1b06090884a9b27d1 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.385 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.386 [2024-07-14 15:03:13.670116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.386 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.643 nvme0n1 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.643 [ 00:30:34.643 { 00:30:34.643 "name": "nvme0n1", 00:30:34.643 "aliases": [ 00:30:34.643 "ef63ef70-5f89-42f1-b060-90884a9b27d1" 00:30:34.643 ], 00:30:34.643 "product_name": "NVMe disk", 00:30:34.643 "block_size": 512, 00:30:34.643 "num_blocks": 2097152, 00:30:34.643 "uuid": "ef63ef70-5f89-42f1-b060-90884a9b27d1", 00:30:34.643 "assigned_rate_limits": { 00:30:34.643 "rw_ios_per_sec": 0, 00:30:34.643 "rw_mbytes_per_sec": 0, 00:30:34.643 "r_mbytes_per_sec": 0, 00:30:34.643 "w_mbytes_per_sec": 0 00:30:34.643 }, 00:30:34.643 "claimed": false, 00:30:34.643 "zoned": false, 00:30:34.643 "supported_io_types": { 00:30:34.643 "read": true, 00:30:34.643 "write": true, 00:30:34.643 "unmap": false, 00:30:34.643 "flush": true, 00:30:34.643 "reset": true, 00:30:34.643 "nvme_admin": true, 00:30:34.643 "nvme_io": true, 00:30:34.643 "nvme_io_md": false, 00:30:34.643 "write_zeroes": true, 00:30:34.643 "zcopy": false, 00:30:34.643 "get_zone_info": false, 00:30:34.643 "zone_management": false, 00:30:34.643 "zone_append": false, 00:30:34.643 "compare": true, 00:30:34.643 "compare_and_write": true, 00:30:34.643 "abort": true, 00:30:34.643 "seek_hole": false, 00:30:34.643 "seek_data": false, 00:30:34.643 "copy": true, 00:30:34.643 "nvme_iov_md": false 00:30:34.643 }, 00:30:34.643 "memory_domains": [ 00:30:34.643 { 00:30:34.643 "dma_device_id": "system", 00:30:34.643 "dma_device_type": 1 00:30:34.643 } 00:30:34.643 ], 00:30:34.643 "driver_specific": { 00:30:34.643 "nvme": [ 00:30:34.643 { 00:30:34.643 "trid": { 00:30:34.643 "trtype": "TCP", 00:30:34.643 "adrfam": "IPv4", 00:30:34.643 "traddr": "10.0.0.2", 
00:30:34.643 "trsvcid": "4420", 00:30:34.643 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:34.643 }, 00:30:34.643 "ctrlr_data": { 00:30:34.643 "cntlid": 1, 00:30:34.643 "vendor_id": "0x8086", 00:30:34.643 "model_number": "SPDK bdev Controller", 00:30:34.643 "serial_number": "00000000000000000000", 00:30:34.643 "firmware_revision": "24.09", 00:30:34.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.643 "oacs": { 00:30:34.643 "security": 0, 00:30:34.643 "format": 0, 00:30:34.643 "firmware": 0, 00:30:34.643 "ns_manage": 0 00:30:34.643 }, 00:30:34.643 "multi_ctrlr": true, 00:30:34.643 "ana_reporting": false 00:30:34.643 }, 00:30:34.643 "vs": { 00:30:34.643 "nvme_version": "1.3" 00:30:34.643 }, 00:30:34.643 "ns_data": { 00:30:34.643 "id": 1, 00:30:34.643 "can_share": true 00:30:34.643 } 00:30:34.643 } 00:30:34.643 ], 00:30:34.643 "mp_policy": "active_passive" 00:30:34.643 } 00:30:34.643 } 00:30:34.643 ] 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.643 15:03:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.643 [2024-07-14 15:03:13.926716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.643 [2024-07-14 15:03:13.926846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:30:34.901 [2024-07-14 15:03:14.059123] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:34.901 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.901 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.901 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.901 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.901 [ 00:30:34.901 { 00:30:34.901 "name": "nvme0n1", 00:30:34.901 "aliases": [ 00:30:34.901 "ef63ef70-5f89-42f1-b060-90884a9b27d1" 00:30:34.901 ], 00:30:34.901 "product_name": "NVMe disk", 00:30:34.901 "block_size": 512, 00:30:34.901 "num_blocks": 2097152, 00:30:34.901 "uuid": "ef63ef70-5f89-42f1-b060-90884a9b27d1", 00:30:34.901 "assigned_rate_limits": { 00:30:34.901 "rw_ios_per_sec": 0, 00:30:34.901 "rw_mbytes_per_sec": 0, 00:30:34.901 "r_mbytes_per_sec": 0, 00:30:34.901 "w_mbytes_per_sec": 0 00:30:34.901 }, 00:30:34.901 "claimed": false, 00:30:34.901 "zoned": false, 00:30:34.901 "supported_io_types": { 00:30:34.901 "read": true, 00:30:34.901 "write": true, 00:30:34.901 "unmap": false, 00:30:34.901 "flush": true, 00:30:34.901 "reset": true, 00:30:34.901 "nvme_admin": true, 00:30:34.901 "nvme_io": true, 00:30:34.901 "nvme_io_md": false, 00:30:34.901 "write_zeroes": true, 00:30:34.901 "zcopy": false, 00:30:34.901 "get_zone_info": false, 00:30:34.901 "zone_management": false, 00:30:34.901 "zone_append": false, 00:30:34.901 "compare": true, 00:30:34.901 "compare_and_write": true, 00:30:34.901 "abort": true, 00:30:34.901 "seek_hole": false, 00:30:34.901 "seek_data": false, 00:30:34.901 "copy": true, 00:30:34.901 "nvme_iov_md": false 00:30:34.901 }, 00:30:34.901 "memory_domains": [ 00:30:34.901 { 00:30:34.901 "dma_device_id": "system", 00:30:34.901 
"dma_device_type": 1 00:30:34.901 } 00:30:34.901 ], 00:30:34.901 "driver_specific": { 00:30:34.901 "nvme": [ 00:30:34.901 { 00:30:34.901 "trid": { 00:30:34.901 "trtype": "TCP", 00:30:34.901 "adrfam": "IPv4", 00:30:34.901 "traddr": "10.0.0.2", 00:30:34.901 "trsvcid": "4420", 00:30:34.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:34.901 }, 00:30:34.901 "ctrlr_data": { 00:30:34.901 "cntlid": 2, 00:30:34.901 "vendor_id": "0x8086", 00:30:34.901 "model_number": "SPDK bdev Controller", 00:30:34.902 "serial_number": "00000000000000000000", 00:30:34.902 "firmware_revision": "24.09", 00:30:34.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.902 "oacs": { 00:30:34.902 "security": 0, 00:30:34.902 "format": 0, 00:30:34.902 "firmware": 0, 00:30:34.902 "ns_manage": 0 00:30:34.902 }, 00:30:34.902 "multi_ctrlr": true, 00:30:34.902 "ana_reporting": false 00:30:34.902 }, 00:30:34.902 "vs": { 00:30:34.902 "nvme_version": "1.3" 00:30:34.902 }, 00:30:34.902 "ns_data": { 00:30:34.902 "id": 1, 00:30:34.902 "can_share": true 00:30:34.902 } 00:30:34.902 } 00:30:34.902 ], 00:30:34.902 "mp_policy": "active_passive" 00:30:34.902 } 00:30:34.902 } 00:30:34.902 ] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0I5ETIvWRU 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0I5ETIvWRU 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.902 [2024-07-14 15:03:14.115420] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:34.902 [2024-07-14 15:03:14.115655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0I5ETIvWRU 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.902 [2024-07-14 15:03:14.123406] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0I5ETIvWRU 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.902 [2024-07-14 15:03:14.131416] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:34.902 [2024-07-14 15:03:14.131520] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:34.902 nvme0n1 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.902 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.160 [ 00:30:35.160 { 00:30:35.160 "name": "nvme0n1", 00:30:35.160 "aliases": [ 00:30:35.160 "ef63ef70-5f89-42f1-b060-90884a9b27d1" 00:30:35.160 ], 00:30:35.160 "product_name": "NVMe disk", 00:30:35.160 "block_size": 512, 00:30:35.160 "num_blocks": 2097152, 00:30:35.160 "uuid": "ef63ef70-5f89-42f1-b060-90884a9b27d1", 00:30:35.160 "assigned_rate_limits": { 00:30:35.160 "rw_ios_per_sec": 0, 00:30:35.160 "rw_mbytes_per_sec": 0, 00:30:35.160 "r_mbytes_per_sec": 0, 00:30:35.160 "w_mbytes_per_sec": 0 00:30:35.160 }, 00:30:35.160 "claimed": false, 00:30:35.160 "zoned": false, 00:30:35.160 "supported_io_types": { 00:30:35.160 "read": true, 00:30:35.160 "write": true, 00:30:35.160 "unmap": false, 00:30:35.160 "flush": true, 00:30:35.160 "reset": true, 00:30:35.160 "nvme_admin": true, 00:30:35.160 "nvme_io": true, 00:30:35.160 "nvme_io_md": false, 00:30:35.160 "write_zeroes": true, 00:30:35.160 "zcopy": false, 00:30:35.160 "get_zone_info": false, 00:30:35.160 "zone_management": false, 00:30:35.160 "zone_append": false, 00:30:35.160 "compare": true, 00:30:35.160 "compare_and_write": true, 00:30:35.160 "abort": true, 00:30:35.160 "seek_hole": false, 00:30:35.160 "seek_data": false, 00:30:35.160 "copy": true, 00:30:35.160 "nvme_iov_md": false 00:30:35.160 }, 00:30:35.160 "memory_domains": [ 00:30:35.160 { 00:30:35.160 "dma_device_id": "system", 00:30:35.160 "dma_device_type": 1 00:30:35.160 } 00:30:35.160 ], 00:30:35.160 "driver_specific": { 00:30:35.160 "nvme": [ 00:30:35.160 { 00:30:35.160 "trid": { 00:30:35.160 "trtype": "TCP", 00:30:35.160 "adrfam": "IPv4", 00:30:35.160 "traddr": "10.0.0.2", 00:30:35.160 "trsvcid": "4421", 00:30:35.160 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:35.160 }, 00:30:35.160 "ctrlr_data": { 00:30:35.160 "cntlid": 3, 00:30:35.160 "vendor_id": "0x8086", 00:30:35.160 "model_number": "SPDK bdev Controller", 00:30:35.160 "serial_number": "00000000000000000000", 00:30:35.160 "firmware_revision": "24.09", 00:30:35.160 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:35.160 "oacs": { 00:30:35.160 "security": 0, 00:30:35.160 "format": 0, 00:30:35.160 "firmware": 0, 00:30:35.160 "ns_manage": 0 00:30:35.160 }, 00:30:35.160 "multi_ctrlr": true, 00:30:35.160 "ana_reporting": false 00:30:35.160 }, 00:30:35.160 "vs": { 00:30:35.160 "nvme_version": "1.3" 00:30:35.160 }, 00:30:35.160 "ns_data": { 00:30:35.160 "id": 1, 00:30:35.160 "can_share": true 00:30:35.160 } 00:30:35.160 } 00:30:35.160 ], 00:30:35.160 "mp_policy": "active_passive" 00:30:35.160 } 00:30:35.160 } 00:30:35.160 ] 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0I5ETIvWRU 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.160 rmmod nvme_tcp 00:30:35.160 rmmod nvme_fabrics 00:30:35.160 rmmod nvme_keyring 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1994100 ']' 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1994100 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1994100 ']' 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1994100 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994100 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994100' 00:30:35.160 killing process with pid 1994100 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1994100 00:30:35.160 [2024-07-14 15:03:14.319191] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:35.160 [2024-07-14 15:03:14.319255] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:35.160 15:03:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1994100 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.536 15:03:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.439 15:03:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.439 00:30:38.439 real 0m7.089s 00:30:38.439 user 0m3.750s 00:30:38.439 sys 0m1.946s 00:30:38.439 15:03:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.439 15:03:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.439 ************************************ 00:30:38.439 END TEST nvmf_async_init 00:30:38.439 ************************************ 00:30:38.439 15:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:38.439 15:03:17 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:38.439 15:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:38.439 15:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.439 15:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.439 ************************************ 00:30:38.439 START TEST dma 00:30:38.439 ************************************ 00:30:38.439 15:03:17 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:38.439 * Looking for test storage... 
00:30:38.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.439 15:03:17 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.439 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.439 15:03:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.440 15:03:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.440 15:03:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.440 15:03:17 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.440 15:03:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.440 15:03:17 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.440 15:03:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:38.440 15:03:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.440 15:03:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.440 15:03:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:38.440 15:03:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:38.440 00:30:38.440 real 0m0.067s 00:30:38.440 user 0m0.024s 00:30:38.440 sys 0m0.049s 00:30:38.440 15:03:17 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.440 15:03:17 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 ************************************ 00:30:38.440 END TEST dma 00:30:38.440 ************************************ 00:30:38.440 15:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:38.440 15:03:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:38.440 15:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:38.440 15:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.440 15:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 ************************************ 00:30:38.440 START TEST nvmf_identify 00:30:38.440 ************************************ 00:30:38.440 15:03:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:38.698 * Looking for test storage... 
00:30:38.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.698 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.699 15:03:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.599 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:40.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:40.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:40.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:40.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.600 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:40.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:30:40.858 00:30:40.858 --- 10.0.0.2 ping statistics --- 00:30:40.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.858 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:30:40.858 00:30:40.858 --- 10.0.0.1 ping statistics --- 00:30:40.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.858 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1996478 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1996478 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1996478 ']' 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:40.858 15:03:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.858 [2024-07-14 15:03:20.039643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:40.858 [2024-07-14 15:03:20.039792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.858 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.116 [2024-07-14 15:03:20.177499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.374 [2024-07-14 15:03:20.437479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:41.374 [2024-07-14 15:03:20.437540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.374 [2024-07-14 15:03:20.437567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.374 [2024-07-14 15:03:20.437588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.374 [2024-07-14 15:03:20.437609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.374 [2024-07-14 15:03:20.437733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.374 [2024-07-14 15:03:20.437812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.374 [2024-07-14 15:03:20.437912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.374 [2024-07-14 15:03:20.437921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 [2024-07-14 15:03:20.984161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:41.940 15:03:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 Malloc0 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.940 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.941 [2024-07-14 15:03:21.106666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.941 [ 00:30:41.941 { 00:30:41.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:41.941 "subtype": "Discovery", 00:30:41.941 "listen_addresses": [ 00:30:41.941 { 00:30:41.941 "trtype": "TCP", 00:30:41.941 "adrfam": "IPv4", 00:30:41.941 "traddr": "10.0.0.2", 00:30:41.941 "trsvcid": "4420" 00:30:41.941 } 00:30:41.941 ], 00:30:41.941 "allow_any_host": true, 00:30:41.941 "hosts": [] 00:30:41.941 }, 00:30:41.941 { 00:30:41.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:41.941 "subtype": "NVMe", 00:30:41.941 "listen_addresses": [ 00:30:41.941 { 00:30:41.941 "trtype": "TCP", 00:30:41.941 "adrfam": "IPv4", 00:30:41.941 "traddr": "10.0.0.2", 00:30:41.941 "trsvcid": "4420" 00:30:41.941 } 00:30:41.941 ], 00:30:41.941 "allow_any_host": true, 00:30:41.941 "hosts": [], 00:30:41.941 "serial_number": "SPDK00000000000001", 00:30:41.941 "model_number": "SPDK bdev Controller", 00:30:41.941 "max_namespaces": 32, 00:30:41.941 "min_cntlid": 1, 00:30:41.941 "max_cntlid": 65519, 00:30:41.941 "namespaces": [ 00:30:41.941 { 00:30:41.941 "nsid": 1, 00:30:41.941 "bdev_name": "Malloc0", 00:30:41.941 "name": "Malloc0", 00:30:41.941 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:41.941 "eui64": "ABCDEF0123456789", 00:30:41.941 "uuid": "69d2c0c3-fbf2-45b2-8bc6-3467bc6b18eb" 00:30:41.941 } 00:30:41.941 ] 00:30:41.941 } 00:30:41.941 ] 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.941 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:41.941 [2024-07-14 15:03:21.167699] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:41.941 [2024-07-14 15:03:21.167794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996634 ] 00:30:41.941 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.941 [2024-07-14 15:03:21.225614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:41.941 [2024-07-14 15:03:21.225735] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:41.941 [2024-07-14 15:03:21.225758] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:41.941 [2024-07-14 15:03:21.225791] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:41.941 [2024-07-14 15:03:21.225814] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:41.941 [2024-07-14 15:03:21.230112] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:41.941 [2024-07-14 15:03:21.230186] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:41.941 [2024-07-14 15:03:21.243895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:41.941 [2024-07-14 15:03:21.243926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:41.941 [2024-07-14 15:03:21.243941] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:41.941 [2024-07-14 15:03:21.243952] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:41.941 [2024-07-14 15:03:21.244042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.941 [2024-07-14 15:03:21.244063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.941 [2024-07-14 15:03:21.244083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.941 [2024-07-14 15:03:21.244115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:41.941 [2024-07-14 15:03:21.244155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.202 [2024-07-14 15:03:21.251913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.202 [2024-07-14 15:03:21.251941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.202 [2024-07-14 15:03:21.251954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.202 [2024-07-14 15:03:21.251968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.202 [2024-07-14 15:03:21.252006] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:42.202 [2024-07-14 15:03:21.252032] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:42.202 [2024-07-14 15:03:21.252049] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:42.202 [2024-07-14 15:03:21.252081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.202 [2024-07-14 15:03:21.252100] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.202 [2024-07-14 15:03:21.252113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.202 [2024-07-14 15:03:21.252139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.202 [2024-07-14 15:03:21.252190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.202 [2024-07-14 15:03:21.252354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.202 [2024-07-14 15:03:21.252378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.202 [2024-07-14 15:03:21.252396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.202 [2024-07-14 15:03:21.252409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.202 [2024-07-14 15:03:21.252426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:42.202 [2024-07-14 15:03:21.252453] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:42.202 [2024-07-14 15:03:21.252476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.252490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.252502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.252526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.252561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.252720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.252743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.252755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.252766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.252782] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:42.203 [2024-07-14 15:03:21.252806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.252832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.252847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.252864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.252892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.252927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.253033] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.253054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.253066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.253093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.253121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.253168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.253200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.253303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.253324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.253336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.253365] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:42.203 [2024-07-14 15:03:21.253386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.253409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.253530] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:42.203 [2024-07-14 15:03:21.253546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.253584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.253630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.253661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.253825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.253848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.253860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.253897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:42.203 [2024-07-14 15:03:21.253932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.253960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.253979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.254011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.254129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.254150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.254162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.254196] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:42.203 [2024-07-14 15:03:21.254225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:42.203 [2024-07-14 15:03:21.254249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:42.203 [2024-07-14 15:03:21.254283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:42.203 [2024-07-14 15:03:21.254312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.254353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.203 [2024-07-14 15:03:21.254389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.254598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.203 [2024-07-14 15:03:21.254621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.203 [2024-07-14 15:03:21.254633] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254646] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:42.203 [2024-07-14 15:03:21.254660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.203 [2024-07-14 15:03:21.254677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254700] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.254752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.254764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.254805] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:42.203 [2024-07-14 15:03:21.254822] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:42.203 [2024-07-14 15:03:21.254841] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:42.203 [2024-07-14 15:03:21.254855] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:42.203 [2024-07-14 15:03:21.254874] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:42.203 [2024-07-14 15:03:21.254899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:42.203 [2024-07-14 15:03:21.254923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:42.203 [2024-07-14 15:03:21.254948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.254977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.254997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:42.203 [2024-07-14 15:03:21.255035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.203 [2024-07-14 15:03:21.255156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.203 [2024-07-14 15:03:21.255178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.203 [2024-07-14 15:03:21.255190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.203 [2024-07-14 15:03:21.255227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
00:30:42.203 [2024-07-14 15:03:21.255278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.203 [2024-07-14 15:03:21.255301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.255347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.203 [2024-07-14 15:03:21.255364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:42.203 [2024-07-14 15:03:21.255403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.203 [2024-07-14 15:03:21.255419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.203 [2024-07-14 15:03:21.255458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.255474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.204 [2024-07-14 15:03:21.255489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:42.204 [2024-07-14 15:03:21.255518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:42.204 [2024-07-14 15:03:21.255542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.255556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.255574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.204 [2024-07-14 15:03:21.255612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.204 [2024-07-14 15:03:21.255630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:42.204 [2024-07-14 15:03:21.255642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:42.204 [2024-07-14 15:03:21.255654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.204 [2024-07-14 15:03:21.255666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.204 [2024-07-14 15:03:21.255830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.255852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.255863] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.259884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.204 [2024-07-14 15:03:21.259911] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:42.204 [2024-07-14 15:03:21.259934] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:42.204 [2024-07-14 15:03:21.259984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.260011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.260039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.204 [2024-07-14 15:03:21.260074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.204 [2024-07-14 15:03:21.260249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.204 [2024-07-14 15:03:21.260272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.204 [2024-07-14 15:03:21.260286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.260298] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.204 [2024-07-14 15:03:21.260317] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.204 [2024-07-14 15:03:21.260329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.260362] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.260379] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.300986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.301017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.301031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.204 [2024-07-14 15:03:21.301081] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:42.204 [2024-07-14 15:03:21.301151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.301203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.204 [2024-07-14 15:03:21.301225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.301270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.204 [2024-07-14 15:03:21.301322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.204 [2024-07-14 15:03:21.301341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.204 [2024-07-14 15:03:21.301622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.204 [2024-07-14 15:03:21.301645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.204 [2024-07-14 15:03:21.301658] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301670] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:42.204 [2024-07-14 15:03:21.301683] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:42.204 [2024-07-14 15:03:21.301703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301722] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301736] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.301774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.301786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.301798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.204 [2024-07-14 15:03:21.345918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.345947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.345965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.345978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.204 [2024-07-14 15:03:21.346015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.346053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.204 [2024-07-14 15:03:21.346111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.204 [2024-07-14 15:03:21.346306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.204 [2024-07-14 15:03:21.346327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.204 [2024-07-14 15:03:21.346339] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346350] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:42.204 [2024-07-14 15:03:21.346363] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:42.204 [2024-07-14 15:03:21.346374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346392] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346406] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.346442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.346454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.204 [2024-07-14 15:03:21.346492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.204 [2024-07-14 15:03:21.346537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.204 [2024-07-14 15:03:21.346591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.204 [2024-07-14 15:03:21.346768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.204 [2024-07-14 15:03:21.346789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.204 [2024-07-14 15:03:21.346801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346812] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:42.204 [2024-07-14 15:03:21.346824] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:42.204 [2024-07-14 15:03:21.346835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346852] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.346865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.386984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.204 [2024-07-14 15:03:21.387016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.204 [2024-07-14 15:03:21.387029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.204 [2024-07-14 15:03:21.387042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.204 ===================================================== 00:30:42.204 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:42.204 ===================================================== 00:30:42.204 Controller Capabilities/Features 00:30:42.204 ================================ 00:30:42.204 Vendor ID: 0000 00:30:42.204 Subsystem Vendor ID: 0000 00:30:42.204 Serial Number: .................... 00:30:42.204 Model Number: ........................................ 
00:30:42.204 Firmware Version: 24.09
00:30:42.204 Recommended Arb Burst: 0
00:30:42.204 IEEE OUI Identifier: 00 00 00
00:30:42.204 Multi-path I/O
00:30:42.204 May have multiple subsystem ports: No
00:30:42.204 May have multiple controllers: No
00:30:42.204 Associated with SR-IOV VF: No
00:30:42.204 Max Data Transfer Size: 131072
00:30:42.204 Max Number of Namespaces: 0
00:30:42.204 Max Number of I/O Queues: 1024
00:30:42.204 NVMe Specification Version (VS): 1.3
00:30:42.204 NVMe Specification Version (Identify): 1.3
00:30:42.204 Maximum Queue Entries: 128
00:30:42.204 Contiguous Queues Required: Yes
00:30:42.204 Arbitration Mechanisms Supported
00:30:42.204 Weighted Round Robin: Not Supported
00:30:42.204 Vendor Specific: Not Supported
00:30:42.204 Reset Timeout: 15000 ms
00:30:42.204 Doorbell Stride: 4 bytes
00:30:42.204 NVM Subsystem Reset: Not Supported
00:30:42.204 Command Sets Supported
00:30:42.205 NVM Command Set: Supported
00:30:42.205 Boot Partition: Not Supported
00:30:42.205 Memory Page Size Minimum: 4096 bytes
00:30:42.205 Memory Page Size Maximum: 4096 bytes
00:30:42.205 Persistent Memory Region: Not Supported
00:30:42.205 Optional Asynchronous Events Supported
00:30:42.205 Namespace Attribute Notices: Not Supported
00:30:42.205 Firmware Activation Notices: Not Supported
00:30:42.205 ANA Change Notices: Not Supported
00:30:42.205 PLE Aggregate Log Change Notices: Not Supported
00:30:42.205 LBA Status Info Alert Notices: Not Supported
00:30:42.205 EGE Aggregate Log Change Notices: Not Supported
00:30:42.205 Normal NVM Subsystem Shutdown event: Not Supported
00:30:42.205 Zone Descriptor Change Notices: Not Supported
00:30:42.205 Discovery Log Change Notices: Supported
00:30:42.205 Controller Attributes
00:30:42.205 128-bit Host Identifier: Not Supported
00:30:42.205 Non-Operational Permissive Mode: Not Supported
00:30:42.205 NVM Sets: Not Supported
00:30:42.205 Read Recovery Levels: Not Supported
00:30:42.205 Endurance Groups: Not Supported
00:30:42.205 Predictable Latency Mode: Not Supported
00:30:42.205 Traffic Based Keep ALive: Not Supported
00:30:42.205 Namespace Granularity: Not Supported
00:30:42.205 SQ Associations: Not Supported
00:30:42.205 UUID List: Not Supported
00:30:42.205 Multi-Domain Subsystem: Not Supported
00:30:42.205 Fixed Capacity Management: Not Supported
00:30:42.205 Variable Capacity Management: Not Supported
00:30:42.205 Delete Endurance Group: Not Supported
00:30:42.205 Delete NVM Set: Not Supported
00:30:42.205 Extended LBA Formats Supported: Not Supported
00:30:42.205 Flexible Data Placement Supported: Not Supported
00:30:42.205
00:30:42.205 Controller Memory Buffer Support
00:30:42.205 ================================
00:30:42.205 Supported: No
00:30:42.205
00:30:42.205 Persistent Memory Region Support
00:30:42.205 ================================
00:30:42.205 Supported: No
00:30:42.205
00:30:42.205 Admin Command Set Attributes
00:30:42.205 ============================
00:30:42.205 Security Send/Receive: Not Supported
00:30:42.205 Format NVM: Not Supported
00:30:42.205 Firmware Activate/Download: Not Supported
00:30:42.205 Namespace Management: Not Supported
00:30:42.205 Device Self-Test: Not Supported
00:30:42.205 Directives: Not Supported
00:30:42.205 NVMe-MI: Not Supported
00:30:42.205 Virtualization Management: Not Supported
00:30:42.205 Doorbell Buffer Config: Not Supported
00:30:42.205 Get LBA Status Capability: Not Supported
00:30:42.205 Command & Feature Lockdown Capability: Not Supported
00:30:42.205 Abort Command Limit: 1
00:30:42.205 Async Event Request Limit: 4
00:30:42.205 Number of Firmware Slots: N/A
00:30:42.205 Firmware Slot 1 Read-Only: N/A
00:30:42.205 Firmware Activation Without Reset: N/A
00:30:42.205 Multiple Update Detection Support: N/A
00:30:42.205 Firmware Update Granularity: No Information Provided
00:30:42.205 Per-Namespace SMART Log: No
00:30:42.205 Asymmetric Namespace Access Log Page: Not Supported
00:30:42.205 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:42.205 Command Effects Log Page: Not Supported
00:30:42.205 Get Log Page Extended Data: Supported
00:30:42.205 Telemetry Log Pages: Not Supported
00:30:42.205 Persistent Event Log Pages: Not Supported
00:30:42.205 Supported Log Pages Log Page: May Support
00:30:42.205 Commands Supported & Effects Log Page: Not Supported
00:30:42.205 Feature Identifiers & Effects Log Page:May Support
00:30:42.205 NVMe-MI Commands & Effects Log Page: May Support
00:30:42.205 Data Area 4 for Telemetry Log: Not Supported
00:30:42.205 Error Log Page Entries Supported: 128
00:30:42.205 Keep Alive: Not Supported
00:30:42.205
00:30:42.205 NVM Command Set Attributes
00:30:42.205 ==========================
00:30:42.205 Submission Queue Entry Size
00:30:42.205 Max: 1
00:30:42.205 Min: 1
00:30:42.205 Completion Queue Entry Size
00:30:42.205 Max: 1
00:30:42.205 Min: 1
00:30:42.205 Number of Namespaces: 0
00:30:42.205 Compare Command: Not Supported
00:30:42.205 Write Uncorrectable Command: Not Supported
00:30:42.205 Dataset Management Command: Not Supported
00:30:42.205 Write Zeroes Command: Not Supported
00:30:42.205 Set Features Save Field: Not Supported
00:30:42.205 Reservations: Not Supported
00:30:42.205 Timestamp: Not Supported
00:30:42.205 Copy: Not Supported
00:30:42.205 Volatile Write Cache: Not Present
00:30:42.205 Atomic Write Unit (Normal): 1
00:30:42.205 Atomic Write Unit (PFail): 1
00:30:42.205 Atomic Compare & Write Unit: 1
00:30:42.205 Fused Compare & Write: Supported
00:30:42.205 Scatter-Gather List
00:30:42.205 SGL Command Set: Supported
00:30:42.205 SGL Keyed: Supported
00:30:42.205 SGL Bit Bucket Descriptor: Not Supported
00:30:42.205 SGL Metadata Pointer: Not Supported
00:30:42.205 Oversized SGL: Not Supported
00:30:42.205 SGL Metadata Address: Not Supported
00:30:42.205 SGL Offset: Supported
00:30:42.205 Transport SGL Data Block: Not Supported
00:30:42.205 Replay Protected Memory Block: Not Supported
00:30:42.205
00:30:42.205 Firmware Slot Information
00:30:42.205 =========================
00:30:42.205 Active slot: 0
00:30:42.205
00:30:42.205
00:30:42.205 Error Log
00:30:42.205 =========
00:30:42.205
00:30:42.205 Active Namespaces
00:30:42.205 =================
00:30:42.205 Discovery Log Page
00:30:42.205 ==================
00:30:42.205 Generation Counter: 2
00:30:42.205 Number of Records: 2
00:30:42.205 Record Format: 0
00:30:42.205
00:30:42.205 Discovery Log Entry 0
00:30:42.205 ----------------------
00:30:42.205 Transport Type: 3 (TCP)
00:30:42.205 Address Family: 1 (IPv4)
00:30:42.205 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:42.205 Entry Flags:
00:30:42.205 Duplicate Returned Information: 1
00:30:42.205 Explicit Persistent Connection Support for Discovery: 1
00:30:42.205 Transport Requirements:
00:30:42.205 Secure Channel: Not Required
00:30:42.205 Port ID: 0 (0x0000)
00:30:42.205 Controller ID: 65535 (0xffff)
00:30:42.205 Admin Max SQ Size: 128
00:30:42.205 Transport Service Identifier: 4420
00:30:42.205 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:42.205 Transport Address: 10.0.0.2
00:30:42.205
Discovery Log Entry 1 00:30:42.205 ---------------------- 00:30:42.205 Transport Type: 3 (TCP) 00:30:42.205 Address Family: 1 (IPv4) 00:30:42.205 Subsystem Type: 2 (NVM Subsystem) 00:30:42.205 Entry Flags: 00:30:42.205 Duplicate Returned Information: 0 00:30:42.205 Explicit Persistent Connection Support for Discovery: 0 00:30:42.205 Transport Requirements: 00:30:42.205 Secure Channel: Not Required 00:30:42.205 Port ID: 0 (0x0000) 00:30:42.205 Controller ID: 65535 (0xffff) 00:30:42.205 Admin Max SQ Size: 128 00:30:42.205 Transport Service Identifier: 4420 00:30:42.205 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:42.205 Transport Address: 10.0.0.2 [2024-07-14 15:03:21.387236] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:42.205 [2024-07-14 15:03:21.387287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.205 [2024-07-14 15:03:21.387309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.205 [2024-07-14 15:03:21.387327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:42.205 [2024-07-14 15:03:21.387341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.205 [2024-07-14 15:03:21.387353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:42.205 [2024-07-14 15:03:21.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.205 [2024-07-14 15:03:21.387380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.205 [2024-07-14 15:03:21.387394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.205 [2024-07-14 15:03:21.387421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.205 [2024-07-14 15:03:21.387437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.205 [2024-07-14 15:03:21.387449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.205 [2024-07-14 15:03:21.387469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.205 [2024-07-14 15:03:21.387506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.205 [2024-07-14 15:03:21.387631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.205 [2024-07-14 15:03:21.387654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.205 [2024-07-14 15:03:21.387667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.205 [2024-07-14 15:03:21.387679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.205 [2024-07-14 15:03:21.387701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.205 [2024-07-14 15:03:21.387715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.205 [2024-07-14 15:03:21.387727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:30:42.205 [2024-07-14 15:03:21.387752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.205 [2024-07-14 15:03:21.387797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.205 [2024-07-14 15:03:21.387955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.205 [2024-07-14 15:03:21.387977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.387989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.388014] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:42.206 [2024-07-14 15:03:21.388028] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:42.206 [2024-07-14 15:03:21.388070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.388118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.388150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.388278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.388316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.388329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.388368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.388414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.388445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.388556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.388578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.388590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.388628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388643] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.388680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.388710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.388830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.388850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.388861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.388930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.388957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.388975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.389005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.389105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.389125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.389137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.389174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.389219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.389249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.389362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.389391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.389405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.389443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389470] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.389488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.389518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.389616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.389636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.389648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.389685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.389711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.389729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.389759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.389861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.393899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.393923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.393935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.393964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.393980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.393991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.206 [2024-07-14 15:03:21.394015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.206 [2024-07-14 15:03:21.394049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.206 [2024-07-14 15:03:21.394162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.206 [2024-07-14 15:03:21.394183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.206 [2024-07-14 15:03:21.394195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.206 [2024-07-14 15:03:21.394206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.206 [2024-07-14 15:03:21.394227] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:30:42.206 00:30:42.206 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:42.206 [2024-07-14 15:03:21.504248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:42.206 [2024-07-14 15:03:21.504342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996643 ] 00:30:42.471 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.471 [2024-07-14 15:03:21.562450] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:42.471 [2024-07-14 15:03:21.562569] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:42.471 [2024-07-14 15:03:21.562590] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:42.471 [2024-07-14 15:03:21.562627] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:42.471 [2024-07-14 15:03:21.562652] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:42.471 [2024-07-14 15:03:21.565946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:42.471 [2024-07-14 15:03:21.566020] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:42.471 [2024-07-14 15:03:21.572899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:42.471 [2024-07-14 15:03:21.572934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:42.471 [2024-07-14 15:03:21.572950] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:42.471 [2024-07-14 15:03:21.572968] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:42.471 [2024-07-14 15:03:21.573041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.573066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.573083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.471 [2024-07-14 15:03:21.573117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:42.471 [2024-07-14 15:03:21.573180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.471 [2024-07-14 15:03:21.581911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.471 [2024-07-14 15:03:21.581938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.471 [2024-07-14 15:03:21.581951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.581971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.471 [2024-07-14 15:03:21.582005] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:42.471 [2024-07-14 15:03:21.582029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:42.471 [2024-07-14 15:03:21.582046] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:42.471 [2024-07-14 
15:03:21.582081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.471 [2024-07-14 15:03:21.582136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-14 15:03:21.582187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.471 [2024-07-14 15:03:21.582394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.471 [2024-07-14 15:03:21.582420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.471 [2024-07-14 15:03:21.582434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.471 [2024-07-14 15:03:21.582470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:42.471 [2024-07-14 15:03:21.582494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:42.471 [2024-07-14 15:03:21.582520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.471 [2024-07-14 15:03:21.582584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-14 15:03:21.582633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.471 [2024-07-14 15:03:21.582803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.471 [2024-07-14 15:03:21.582825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.471 [2024-07-14 15:03:21.582837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.471 [2024-07-14 15:03:21.582864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:42.471 [2024-07-14 15:03:21.582905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:42.471 [2024-07-14 15:03:21.582929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.582955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.471 [2024-07-14 15:03:21.582995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-14 15:03:21.583028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b100, cid 0, qid 0 00:30:42.471 [2024-07-14 15:03:21.583173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.471 [2024-07-14 15:03:21.583194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.471 [2024-07-14 15:03:21.583206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.471 [2024-07-14 15:03:21.583217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.583233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:42.472 [2024-07-14 15:03:21.583260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.583277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.583288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.583311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.472 [2024-07-14 15:03:21.583360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.583541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.583562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.583574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.583585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.583599] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:42.472 [2024-07-14 15:03:21.583618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:42.472 [2024-07-14 15:03:21.583660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:42.472 [2024-07-14 15:03:21.583779] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:42.472 [2024-07-14 15:03:21.583791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:42.472 [2024-07-14 15:03:21.583814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.583827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.583844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.583890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.472 [2024-07-14 15:03:21.583927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.584089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.584116] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.584129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.584141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.584155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:42.472 [2024-07-14 15:03:21.584184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.584205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.584218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.584252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.472 [2024-07-14 15:03:21.584284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.584438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.584458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.584470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.584481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.584495] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:42.472 [2024-07-14 15:03:21.584526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.584552] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:42.472 [2024-07-14 15:03:21.584573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.584601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.584630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.584650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.472 [2024-07-14 15:03:21.584686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.584995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.472 [2024-07-14 15:03:21.585017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.472 [2024-07-14 15:03:21.585029] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585042] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:42.472 [2024-07-14 15:03:21.585055] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.472 [2024-07-14 15:03:21.585068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585088] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585103] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.585145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.585157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.585197] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:42.472 [2024-07-14 15:03:21.585213] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:42.472 [2024-07-14 15:03:21.585226] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:42.472 [2024-07-14 15:03:21.585239] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:42.472 [2024-07-14 15:03:21.585260] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:42.472 [2024-07-14 15:03:21.585293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.585317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.585356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.585405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:42.472 [2024-07-14 15:03:21.585437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.585653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.585674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.585686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.585716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.585770] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.472 [2024-07-14 15:03:21.585789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.585834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.585855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.472 [2024-07-14 15:03:21.589899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.589917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.589927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.589944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.472 [2024-07-14 15:03:21.589960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.589972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.589982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.589997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.472 [2024-07-14 15:03:21.590011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.590047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.590072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.472 [2024-07-14 15:03:21.590085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.472 [2024-07-14 15:03:21.590104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.472 [2024-07-14 15:03:21.590138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.472 [2024-07-14 15:03:21.590174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:42.472 [2024-07-14 15:03:21.590186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:42.472 [2024-07-14 15:03:21.590198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.472 [2024-07-14 15:03:21.590210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.472 [2024-07-14 15:03:21.590387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.472 [2024-07-14 15:03:21.590409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.472 [2024-07-14 15:03:21.590421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.472 
[2024-07-14 15:03:21.590433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.472 [2024-07-14 15:03:21.590463] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:42.472 [2024-07-14 15:03:21.590486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.590525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:42.472 [2024-07-14 15:03:21.590551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.590569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.590582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.590593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.590619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:42.473 [2024-07-14 15:03:21.590656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.473 [2024-07-14 15:03:21.590874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.590905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.590917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.590928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.591026] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.591067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.591095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.591130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.591188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.473 [2024-07-14 15:03:21.591399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.473 [2024-07-14 15:03:21.591422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.473 [2024-07-14 15:03:21.591434] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591445] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.473 [2024-07-14 15:03:21.591457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.473 [2024-07-14 15:03:21.591468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591507] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591523] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.591558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.591569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.591622] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:42.473 [2024-07-14 15:03:21.591696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.591752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.591788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.591820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.591838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.591896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.473 [2024-07-14 15:03:21.592110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.473 [2024-07-14 15:03:21.592131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.473 [2024-07-14 15:03:21.592143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592154] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.473 [2024-07-14 15:03:21.592170] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.473 [2024-07-14 15:03:21.592182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592204] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592218] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.592253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.592265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.592314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.592344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.592371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.592434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.592481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.473 [2024-07-14 15:03:21.592686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.473 [2024-07-14 15:03:21.592707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.473 [2024-07-14 15:03:21.592719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592730] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.473 [2024-07-14 15:03:21.592742] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.473 [2024-07-14 15:03:21.592753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.592805] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.636902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.636948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.636962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.636975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.637003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637127] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 
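The trace above shows the SPDK host library walking its controller-initialization state machine against the target at 10.0.0.2:4420: identify controller, set/get number of queues, identify active namespaces, per-namespace identify and descriptor reads, keep-alive setup, and finally the host-ID step, which is skipped for NVMe-oF transports. The test drives this sequence from SPDK's own userspace initiator, but as a rough illustration only, the same attach sequence could be triggered from a Linux kernel initiator with nvme-cli (module and device names assumed, not taken from this log):

  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1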
00:30:42.473 [2024-07-14 15:03:21.637141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:42.473 [2024-07-14 15:03:21.637155] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:42.473 [2024-07-14 15:03:21.637212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.637267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.637293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.637336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.473 [2024-07-14 15:03:21.637384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.473 [2024-07-14 15:03:21.637404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.473 [2024-07-14 15:03:21.637609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.637631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.637644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.637689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.637707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.637719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.473 [2024-07-14 15:03:21.637770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.637786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.637809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.637840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.473 [2024-07-14 15:03:21.637992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.638014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.638026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.638038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 
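Once the controller reports ready, the host starts reading its features back: GET FEATURES ARBITRATION (cdw10 0x01) and POWER MANAGEMENT (cdw10 0x02) appear above, with temperature threshold, number of queues and error recovery following further down. For reference, the same queries can be issued by hand with nvme-cli against an already-connected controller; the device node /dev/nvme0 is assumed here, not taken from this log:

  nvme get-feature /dev/nvme0 -f 0x01   # Arbitration
  nvme get-feature /dev/nvme0 -f 0x02   # Power Management
  nvme get-feature /dev/nvme0 -f 0x04   # Temperature Threshold
  nvme get-feature /dev/nvme0 -f 0x07   # Number of Queues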
00:30:42.473 [2024-07-14 15:03:21.638064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.473 [2024-07-14 15:03:21.638080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.473 [2024-07-14 15:03:21.638099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.473 [2024-07-14 15:03:21.638130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.473 [2024-07-14 15:03:21.638236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.473 [2024-07-14 15:03:21.638256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.473 [2024-07-14 15:03:21.638273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.474 [2024-07-14 15:03:21.638311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.474 [2024-07-14 15:03:21.638345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.474 [2024-07-14 15:03:21.638376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.474 [2024-07-14 15:03:21.638484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.474 [2024-07-14 15:03:21.638505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.474 [2024-07-14 15:03:21.638517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.474 [2024-07-14 15:03:21.638571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.474 [2024-07-14 15:03:21.638610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.474 [2024-07-14 15:03:21.638632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.474 [2024-07-14 15:03:21.638682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.474 [2024-07-14 15:03:21.638702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:42.474 [2024-07-14 15:03:21.638734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.474 [2024-07-14 15:03:21.638759] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.638778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:42.474 [2024-07-14 15:03:21.638796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.474 [2024-07-14 15:03:21.638827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.474 [2024-07-14 15:03:21.638860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.474 [2024-07-14 15:03:21.638873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:42.474 [2024-07-14 15:03:21.638913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:42.474 [2024-07-14 15:03:21.639257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.474 [2024-07-14 15:03:21.639279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.474 [2024-07-14 15:03:21.639307] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:42.474 [2024-07-14 15:03:21.639332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:42.474 [2024-07-14 15:03:21.639344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639383] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.474 [2024-07-14 15:03:21.639415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.474 [2024-07-14 15:03:21.639426] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639437] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:42.474 [2024-07-14 15:03:21.639449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:42.474 [2024-07-14 15:03:21.639460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639482] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.474 [2024-07-14 15:03:21.639534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.474 [2024-07-14 15:03:21.639546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639556] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:42.474 [2024-07-14 15:03:21.639568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on 
tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:42.474 [2024-07-14 15:03:21.639579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639611] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639623] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.474 [2024-07-14 15:03:21.639668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.474 [2024-07-14 15:03:21.639678] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639688] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:42.474 [2024-07-14 15:03:21.639699] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.474 [2024-07-14 15:03:21.639710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639726] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639738] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.474 [2024-07-14 15:03:21.639772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.474 [2024-07-14 15:03:21.639782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.474 [2024-07-14 15:03:21.639835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.474 [2024-07-14 15:03:21.639852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.474 [2024-07-14 15:03:21.639888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.474 [2024-07-14 15:03:21.639925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.474 [2024-07-14 15:03:21.639943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.474 [2024-07-14 15:03:21.639955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.639965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:42.474 [2024-07-14 15:03:21.639994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.474 [2024-07-14 15:03:21.640012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.474 [2024-07-14 15:03:21.640023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.474 [2024-07-14 15:03:21.640033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:42.474 ===================================================== 00:30:42.474 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.474 ===================================================== 00:30:42.474 Controller Capabilities/Features 00:30:42.474 ================================ 
00:30:42.474 Vendor ID: 8086 00:30:42.474 Subsystem Vendor ID: 8086 00:30:42.474 Serial Number: SPDK00000000000001 00:30:42.474 Model Number: SPDK bdev Controller 00:30:42.474 Firmware Version: 24.09 00:30:42.474 Recommended Arb Burst: 6 00:30:42.474 IEEE OUI Identifier: e4 d2 5c 00:30:42.474 Multi-path I/O 00:30:42.474 May have multiple subsystem ports: Yes 00:30:42.474 May have multiple controllers: Yes 00:30:42.474 Associated with SR-IOV VF: No 00:30:42.474 Max Data Transfer Size: 131072 00:30:42.474 Max Number of Namespaces: 32 00:30:42.474 Max Number of I/O Queues: 127 00:30:42.474 NVMe Specification Version (VS): 1.3 00:30:42.474 NVMe Specification Version (Identify): 1.3 00:30:42.474 Maximum Queue Entries: 128 00:30:42.474 Contiguous Queues Required: Yes 00:30:42.474 Arbitration Mechanisms Supported 00:30:42.474 Weighted Round Robin: Not Supported 00:30:42.474 Vendor Specific: Not Supported 00:30:42.474 Reset Timeout: 15000 ms 00:30:42.474 Doorbell Stride: 4 bytes 00:30:42.474 NVM Subsystem Reset: Not Supported 00:30:42.474 Command Sets Supported 00:30:42.474 NVM Command Set: Supported 00:30:42.474 Boot Partition: Not Supported 00:30:42.474 Memory Page Size Minimum: 4096 bytes 00:30:42.474 Memory Page Size Maximum: 4096 bytes 00:30:42.474 Persistent Memory Region: Not Supported 00:30:42.474 Optional Asynchronous Events Supported 00:30:42.474 Namespace Attribute Notices: Supported 00:30:42.474 Firmware Activation Notices: Not Supported 00:30:42.474 ANA Change Notices: Not Supported 00:30:42.474 PLE Aggregate Log Change Notices: Not Supported 00:30:42.474 LBA Status Info Alert Notices: Not Supported 00:30:42.474 EGE Aggregate Log Change Notices: Not Supported 00:30:42.474 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.474 Zone Descriptor Change Notices: Not Supported 00:30:42.474 Discovery Log Change Notices: Not Supported 00:30:42.474 Controller Attributes 00:30:42.474 128-bit Host Identifier: Supported 00:30:42.474 Non-Operational Permissive Mode: Not Supported 00:30:42.474 NVM Sets: Not Supported 00:30:42.474 Read Recovery Levels: Not Supported 00:30:42.474 Endurance Groups: Not Supported 00:30:42.474 Predictable Latency Mode: Not Supported 00:30:42.474 Traffic Based Keep ALive: Not Supported 00:30:42.474 Namespace Granularity: Not Supported 00:30:42.474 SQ Associations: Not Supported 00:30:42.474 UUID List: Not Supported 00:30:42.474 Multi-Domain Subsystem: Not Supported 00:30:42.474 Fixed Capacity Management: Not Supported 00:30:42.474 Variable Capacity Management: Not Supported 00:30:42.474 Delete Endurance Group: Not Supported 00:30:42.474 Delete NVM Set: Not Supported 00:30:42.474 Extended LBA Formats Supported: Not Supported 00:30:42.474 Flexible Data Placement Supported: Not Supported 00:30:42.474 00:30:42.474 Controller Memory Buffer Support 00:30:42.474 ================================ 00:30:42.474 Supported: No 00:30:42.474 00:30:42.474 Persistent Memory Region Support 00:30:42.475 ================================ 00:30:42.475 Supported: No 00:30:42.475 00:30:42.475 Admin Command Set Attributes 00:30:42.475 ============================ 00:30:42.475 Security Send/Receive: Not Supported 00:30:42.475 Format NVM: Not Supported 00:30:42.475 Firmware Activate/Download: Not Supported 00:30:42.475 Namespace Management: Not Supported 00:30:42.475 Device Self-Test: Not Supported 00:30:42.475 Directives: Not Supported 00:30:42.475 NVMe-MI: Not Supported 00:30:42.475 Virtualization Management: Not Supported 00:30:42.475 Doorbell Buffer Config: Not Supported 00:30:42.475 
Get LBA Status Capability: Not Supported 00:30:42.475 Command & Feature Lockdown Capability: Not Supported 00:30:42.475 Abort Command Limit: 4 00:30:42.475 Async Event Request Limit: 4 00:30:42.475 Number of Firmware Slots: N/A 00:30:42.475 Firmware Slot 1 Read-Only: N/A 00:30:42.475 Firmware Activation Without Reset: N/A 00:30:42.475 Multiple Update Detection Support: N/A 00:30:42.475 Firmware Update Granularity: No Information Provided 00:30:42.475 Per-Namespace SMART Log: No 00:30:42.475 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.475 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:42.475 Command Effects Log Page: Supported 00:30:42.475 Get Log Page Extended Data: Supported 00:30:42.475 Telemetry Log Pages: Not Supported 00:30:42.475 Persistent Event Log Pages: Not Supported 00:30:42.475 Supported Log Pages Log Page: May Support 00:30:42.475 Commands Supported & Effects Log Page: Not Supported 00:30:42.475 Feature Identifiers & Effects Log Page:May Support 00:30:42.475 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.475 Data Area 4 for Telemetry Log: Not Supported 00:30:42.475 Error Log Page Entries Supported: 128 00:30:42.475 Keep Alive: Supported 00:30:42.475 Keep Alive Granularity: 10000 ms 00:30:42.475 00:30:42.475 NVM Command Set Attributes 00:30:42.475 ========================== 00:30:42.475 Submission Queue Entry Size 00:30:42.475 Max: 64 00:30:42.475 Min: 64 00:30:42.475 Completion Queue Entry Size 00:30:42.475 Max: 16 00:30:42.475 Min: 16 00:30:42.475 Number of Namespaces: 32 00:30:42.475 Compare Command: Supported 00:30:42.475 Write Uncorrectable Command: Not Supported 00:30:42.475 Dataset Management Command: Supported 00:30:42.475 Write Zeroes Command: Supported 00:30:42.475 Set Features Save Field: Not Supported 00:30:42.475 Reservations: Supported 00:30:42.475 Timestamp: Not Supported 00:30:42.475 Copy: Supported 00:30:42.475 Volatile Write Cache: Present 00:30:42.475 Atomic Write Unit (Normal): 1 00:30:42.475 Atomic Write Unit (PFail): 1 00:30:42.475 Atomic Compare & Write Unit: 1 00:30:42.475 Fused Compare & Write: Supported 00:30:42.475 Scatter-Gather List 00:30:42.475 SGL Command Set: Supported 00:30:42.475 SGL Keyed: Supported 00:30:42.475 SGL Bit Bucket Descriptor: Not Supported 00:30:42.475 SGL Metadata Pointer: Not Supported 00:30:42.475 Oversized SGL: Not Supported 00:30:42.475 SGL Metadata Address: Not Supported 00:30:42.475 SGL Offset: Supported 00:30:42.475 Transport SGL Data Block: Not Supported 00:30:42.475 Replay Protected Memory Block: Not Supported 00:30:42.475 00:30:42.475 Firmware Slot Information 00:30:42.475 ========================= 00:30:42.475 Active slot: 1 00:30:42.475 Slot 1 Firmware Revision: 24.09 00:30:42.475 00:30:42.475 00:30:42.475 Commands Supported and Effects 00:30:42.475 ============================== 00:30:42.475 Admin Commands 00:30:42.475 -------------- 00:30:42.475 Get Log Page (02h): Supported 00:30:42.475 Identify (06h): Supported 00:30:42.475 Abort (08h): Supported 00:30:42.475 Set Features (09h): Supported 00:30:42.475 Get Features (0Ah): Supported 00:30:42.475 Asynchronous Event Request (0Ch): Supported 00:30:42.475 Keep Alive (18h): Supported 00:30:42.475 I/O Commands 00:30:42.475 ------------ 00:30:42.475 Flush (00h): Supported LBA-Change 00:30:42.475 Write (01h): Supported LBA-Change 00:30:42.475 Read (02h): Supported 00:30:42.475 Compare (05h): Supported 00:30:42.475 Write Zeroes (08h): Supported LBA-Change 00:30:42.475 Dataset Management (09h): Supported LBA-Change 00:30:42.475 Copy (19h): 
Supported LBA-Change 00:30:42.475 00:30:42.475 Error Log 00:30:42.475 ========= 00:30:42.475 00:30:42.475 Arbitration 00:30:42.475 =========== 00:30:42.475 Arbitration Burst: 1 00:30:42.475 00:30:42.475 Power Management 00:30:42.475 ================ 00:30:42.475 Number of Power States: 1 00:30:42.475 Current Power State: Power State #0 00:30:42.475 Power State #0: 00:30:42.475 Max Power: 0.00 W 00:30:42.475 Non-Operational State: Operational 00:30:42.475 Entry Latency: Not Reported 00:30:42.475 Exit Latency: Not Reported 00:30:42.475 Relative Read Throughput: 0 00:30:42.475 Relative Read Latency: 0 00:30:42.475 Relative Write Throughput: 0 00:30:42.475 Relative Write Latency: 0 00:30:42.475 Idle Power: Not Reported 00:30:42.475 Active Power: Not Reported 00:30:42.475 Non-Operational Permissive Mode: Not Supported 00:30:42.475 00:30:42.475 Health Information 00:30:42.475 ================== 00:30:42.475 Critical Warnings: 00:30:42.475 Available Spare Space: OK 00:30:42.475 Temperature: OK 00:30:42.475 Device Reliability: OK 00:30:42.475 Read Only: No 00:30:42.475 Volatile Memory Backup: OK 00:30:42.475 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:42.475 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:42.475 Available Spare: 0% 00:30:42.475 Available Spare Threshold: 0% 00:30:42.475 Life Percentage Used:[2024-07-14 15:03:21.640259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.475 [2024-07-14 15:03:21.640279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:42.475 [2024-07-14 15:03:21.640299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.475 [2024-07-14 15:03:21.640331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:42.475 [2024-07-14 15:03:21.640559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.475 [2024-07-14 15:03:21.640581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.475 [2024-07-14 15:03:21.640594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.475 [2024-07-14 15:03:21.640618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:42.475 [2024-07-14 15:03:21.640721] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:42.475 [2024-07-14 15:03:21.640752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.475 [2024-07-14 15:03:21.640779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.475 [2024-07-14 15:03:21.640794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:42.475 [2024-07-14 15:03:21.640807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.475 [2024-07-14 15:03:21.640819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:42.475 [2024-07-14 15:03:21.640832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.475 [2024-07-14 15:03:21.640844] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.475 [2024-07-14 15:03:21.640857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.475 [2024-07-14 15:03:21.644888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.475 [2024-07-14 15:03:21.644911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.475 [2024-07-14 15:03:21.644923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.644943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.644979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.645133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.645156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.645169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.645212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.645259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.645319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.645521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.645544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.645556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.645582] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:42.476 [2024-07-14 15:03:21.645595] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:42.476 [2024-07-14 15:03:21.645637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.645705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.645735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.645890] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.645912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.645924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.645964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.645991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.646009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.646041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.646177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.646198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.646210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.646247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.646297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.646348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.646471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.646493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.646505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.646543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.646593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.646639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.646833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.646855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.646867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.646920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.646946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.646965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.646995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.647129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.647149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.647161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.647198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.647243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.647273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.647378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.647399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.647411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.647449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.647493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.647523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.647630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.647651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.647663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 
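The identify data interleaved with this trace (model "SPDK bdev Controller", serial SPDK00000000000001, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420) describes a target that is normally stood up with a handful of rpc.py calls before the host test runs. The following is only a sketch: the bdev name Malloc0 is assumed, and the 64/512 malloc sizes mirror the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults used elsewhere in these tests rather than values visible in this excerpt:

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420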
[2024-07-14 15:03:21.647674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.647701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.647731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.647750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.647781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.647998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.648019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.648032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.648070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.648114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.648145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.648248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.648268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.648280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 [2024-07-14 15:03:21.648323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.648368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.648398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.476 [2024-07-14 15:03:21.648513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.476 [2024-07-14 15:03:21.648533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.476 [2024-07-14 15:03:21.648545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.476 
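The FABRIC PROPERTY SET near the start of this teardown writes the controller configuration register to request a normal shutdown, and the repeated FABRIC PROPERTY GET commands on cid 3 poll the controller status until its shutdown-status field reports completion; a few lines below, nvme_ctrlr_shutdown_poll_async logs that this took 7 milliseconds. On a kernel initiator, the equivalent teardown of the connection sketched earlier would simply be (assuming nvme-cli):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1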
[2024-07-14 15:03:21.648582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.476 [2024-07-14 15:03:21.648609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.476 [2024-07-14 15:03:21.648635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.476 [2024-07-14 15:03:21.648682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.477 [2024-07-14 15:03:21.648869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.477 [2024-07-14 15:03:21.652912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.477 [2024-07-14 15:03:21.652942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.477 [2024-07-14 15:03:21.652953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.477 [2024-07-14 15:03:21.652996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.477 [2024-07-14 15:03:21.653013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.477 [2024-07-14 15:03:21.653023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.477 [2024-07-14 15:03:21.653046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.477 [2024-07-14 15:03:21.653079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.477 [2024-07-14 15:03:21.653228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.477 [2024-07-14 15:03:21.653250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.477 [2024-07-14 15:03:21.653262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.477 [2024-07-14 15:03:21.653273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.477 [2024-07-14 15:03:21.653296] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:30:42.477 0% 00:30:42.477 Data Units Read: 0 00:30:42.477 Data Units Written: 0 00:30:42.477 Host Read Commands: 0 00:30:42.477 Host Write Commands: 0 00:30:42.477 Controller Busy Time: 0 minutes 00:30:42.477 Power Cycles: 0 00:30:42.477 Power On Hours: 0 hours 00:30:42.477 Unsafe Shutdowns: 0 00:30:42.477 Unrecoverable Media Errors: 0 00:30:42.477 Lifetime Error Log Entries: 0 00:30:42.477 Warning Temperature Time: 0 minutes 00:30:42.477 Critical Temperature Time: 0 minutes 00:30:42.477 00:30:42.477 Number of Queues 00:30:42.477 ================ 00:30:42.477 Number of I/O Submission Queues: 127 00:30:42.477 Number of I/O Completion Queues: 127 00:30:42.477 00:30:42.477 Active Namespaces 00:30:42.477 ================= 00:30:42.477 Namespace ID:1 00:30:42.477 Error Recovery Timeout: Unlimited 00:30:42.477 Command Set Identifier: NVM (00h) 00:30:42.477 Deallocate: Supported 00:30:42.477 Deallocated/Unwritten Error: Not Supported 00:30:42.477 Deallocated Read Value: Unknown 00:30:42.477 Deallocate in Write Zeroes: Not Supported 00:30:42.477 Deallocated Guard Field: 0xFFFF 00:30:42.477 Flush: Supported 00:30:42.477 
Reservation: Supported 00:30:42.477 Namespace Sharing Capabilities: Multiple Controllers 00:30:42.477 Size (in LBAs): 131072 (0GiB) 00:30:42.477 Capacity (in LBAs): 131072 (0GiB) 00:30:42.477 Utilization (in LBAs): 131072 (0GiB) 00:30:42.477 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:42.477 EUI64: ABCDEF0123456789 00:30:42.477 UUID: 69d2c0c3-fbf2-45b2-8bc6-3467bc6b18eb 00:30:42.477 Thin Provisioning: Not Supported 00:30:42.477 Per-NS Atomic Units: Yes 00:30:42.477 Atomic Boundary Size (Normal): 0 00:30:42.477 Atomic Boundary Size (PFail): 0 00:30:42.477 Atomic Boundary Offset: 0 00:30:42.477 Maximum Single Source Range Length: 65535 00:30:42.477 Maximum Copy Length: 65535 00:30:42.477 Maximum Source Range Count: 1 00:30:42.477 NGUID/EUI64 Never Reused: No 00:30:42.477 Namespace Write Protected: No 00:30:42.477 Number of LBA Formats: 1 00:30:42.477 Current LBA Format: LBA Format #00 00:30:42.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.477 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:42.477 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:42.477 rmmod nvme_tcp 00:30:42.477 rmmod nvme_fabrics 00:30:42.477 rmmod nvme_keyring 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1996478 ']' 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1996478 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1996478 ']' 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1996478 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996478 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996478' 
00:30:42.807 killing process with pid 1996478 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1996478 00:30:42.807 15:03:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1996478 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.214 15:03:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.115 15:03:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:46.115 00:30:46.115 real 0m7.514s 00:30:46.115 user 0m10.461s 00:30:46.115 sys 0m2.174s 00:30:46.115 15:03:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.115 15:03:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.115 ************************************ 00:30:46.115 END TEST nvmf_identify 00:30:46.115 ************************************ 00:30:46.115 15:03:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:46.115 15:03:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.115 15:03:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:46.115 15:03:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.115 15:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.115 ************************************ 00:30:46.115 START TEST nvmf_perf 00:30:46.115 ************************************ 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.115 * Looking for test storage... 
00:30:46.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.115 15:03:25 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:46.115 15:03:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:48.016 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:48.016 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.016 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:48.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:48.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.017 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:30:48.274 00:30:48.274 --- 10.0.0.2 ping statistics --- 00:30:48.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.274 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:30:48.274 00:30:48.274 --- 10.0.0.1 ping statistics --- 00:30:48.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.274 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1998704 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1998704 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1998704 ']' 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.274 15:03:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.274 [2024-07-14 15:03:27.496365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:48.274 [2024-07-14 15:03:27.496493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.274 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.532 [2024-07-14 15:03:27.637800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:48.789 [2024-07-14 15:03:27.899025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.789 [2024-07-14 15:03:27.899104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:48.789 [2024-07-14 15:03:27.899132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.789 [2024-07-14 15:03:27.899153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.789 [2024-07-14 15:03:27.899175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.789 [2024-07-14 15:03:27.899293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.789 [2024-07-14 15:03:27.899364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.789 [2024-07-14 15:03:27.899451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.789 [2024-07-14 15:03:27.899475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.353 15:03:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:52.627 15:03:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:52.627 15:03:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:52.627 15:03:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:52.627 15:03:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:53.192 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:53.192 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:53.192 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:53.192 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:53.192 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:53.192 [2024-07-14 15:03:32.482533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.450 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.450 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:53.450 15:03:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.016 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:54.016 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:54.273 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.530 [2024-07-14 15:03:33.661164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.530 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:54.787 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:54.787 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:54.787 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:54.787 15:03:33 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:56.159 Initializing NVMe Controllers 00:30:56.159 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:56.159 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:56.159 Initialization complete. Launching workers. 00:30:56.159 ======================================================== 00:30:56.159 Latency(us) 00:30:56.159 Device Information : IOPS MiB/s Average min max 00:30:56.159 PCIE (0000:88:00.0) NSID 1 from core 0: 74396.50 290.61 429.50 48.07 4375.14 00:30:56.159 ======================================================== 00:30:56.159 Total : 74396.50 290.61 429.50 48.07 4375.14 00:30:56.159 00:30:56.159 15:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:56.417 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.350 Initializing NVMe Controllers 00:30:57.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:57.350 Initialization complete. Launching workers. 
00:30:57.350 ======================================================== 00:30:57.350 Latency(us) 00:30:57.350 Device Information : IOPS MiB/s Average min max 00:30:57.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.00 0.35 11237.43 194.32 47895.54 00:30:57.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15263.66 5008.15 50868.00 00:30:57.350 ======================================================== 00:30:57.350 Total : 155.00 0.61 12951.83 194.32 50868.00 00:30:57.350 00:30:57.608 15:03:36 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.608 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.982 Initializing NVMe Controllers 00:30:58.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:58.982 Initialization complete. Launching workers. 00:30:58.982 ======================================================== 00:30:58.982 Latency(us) 00:30:58.982 Device Information : IOPS MiB/s Average min max 00:30:58.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5497.64 21.48 5832.03 936.75 12178.08 00:30:58.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3823.05 14.93 8425.71 6067.33 18462.71 00:30:58.982 ======================================================== 00:30:58.982 Total : 9320.69 36.41 6895.88 936.75 18462.71 00:30:58.982 00:30:59.240 15:03:38 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:59.240 15:03:38 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:59.240 15:03:38 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.240 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.523 Initializing NVMe Controllers 00:31:02.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.523 Controller IO queue size 128, less than required. 00:31:02.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.523 Controller IO queue size 128, less than required. 00:31:02.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:02.523 Initialization complete. Launching workers. 
00:31:02.523 ======================================================== 00:31:02.523 Latency(us) 00:31:02.523 Device Information : IOPS MiB/s Average min max 00:31:02.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1322.58 330.65 98407.69 61497.80 232303.85 00:31:02.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 539.31 134.83 264190.89 125432.61 518597.63 00:31:02.523 ======================================================== 00:31:02.523 Total : 1861.89 465.47 146427.96 61497.80 518597.63 00:31:02.523 00:31:02.523 15:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:02.523 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.523 No valid NVMe controllers or AIO or URING devices found 00:31:02.523 Initializing NVMe Controllers 00:31:02.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.523 Controller IO queue size 128, less than required. 00:31:02.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.523 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:02.523 Controller IO queue size 128, less than required. 00:31:02.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.523 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:02.523 WARNING: Some requested NVMe devices were skipped 00:31:02.523 15:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:02.523 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.856 Initializing NVMe Controllers 00:31:05.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.856 Controller IO queue size 128, less than required. 00:31:05.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.856 Controller IO queue size 128, less than required. 00:31:05.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:05.856 Initialization complete. Launching workers. 
00:31:05.856 00:31:05.856 ==================== 00:31:05.856 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:05.856 TCP transport: 00:31:05.856 polls: 5864 00:31:05.856 idle_polls: 3308 00:31:05.856 sock_completions: 2556 00:31:05.856 nvme_completions: 4961 00:31:05.856 submitted_requests: 7430 00:31:05.856 queued_requests: 1 00:31:05.856 00:31:05.856 ==================== 00:31:05.856 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:05.856 TCP transport: 00:31:05.856 polls: 8499 00:31:05.856 idle_polls: 5975 00:31:05.856 sock_completions: 2524 00:31:05.856 nvme_completions: 5023 00:31:05.856 submitted_requests: 7492 00:31:05.856 queued_requests: 1 00:31:05.856 ======================================================== 00:31:05.856 Latency(us) 00:31:05.856 Device Information : IOPS MiB/s Average min max 00:31:05.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1239.96 309.99 110813.33 55267.28 424572.21 00:31:05.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1255.46 313.87 103372.14 69210.94 340210.11 00:31:05.856 ======================================================== 00:31:05.856 Total : 2495.42 623.86 107069.63 55267.28 424572.21 00:31:05.856 00:31:05.856 15:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:05.856 15:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.856 15:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:05.856 15:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:05.856 15:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6f69d66d-eb76-49de-8215-2a370421a031 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6f69d66d-eb76-49de-8215-2a370421a031 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6f69d66d-eb76-49de-8215-2a370421a031 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:09.134 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:09.392 { 00:31:09.392 "uuid": "6f69d66d-eb76-49de-8215-2a370421a031", 00:31:09.392 "name": "lvs_0", 00:31:09.392 "base_bdev": "Nvme0n1", 00:31:09.392 "total_data_clusters": 238234, 00:31:09.392 "free_clusters": 238234, 00:31:09.392 "block_size": 512, 00:31:09.392 "cluster_size": 4194304 00:31:09.392 } 00:31:09.392 ]' 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6f69d66d-eb76-49de-8215-2a370421a031") .free_clusters' 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6f69d66d-eb76-49de-8215-2a370421a031") .cluster_size' 00:31:09.392 15:03:48 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:31:09.392 952936 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:09.392 15:03:48 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f69d66d-eb76-49de-8215-2a370421a031 lbd_0 20480 00:31:09.958 15:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7509aac6-4417-4f70-a424-1e48691dd186 00:31:09.958 15:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7509aac6-4417-4f70-a424-1e48691dd186 lvs_n_0 00:31:10.887 15:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0ee19679-a602-4764-b1db-c86a44db7b05 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0ee19679-a602-4764-b1db-c86a44db7b05 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=0ee19679-a602-4764-b1db-c86a44db7b05 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:10.888 15:03:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:10.888 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:10.888 { 00:31:10.888 "uuid": "6f69d66d-eb76-49de-8215-2a370421a031", 00:31:10.888 "name": "lvs_0", 00:31:10.888 "base_bdev": "Nvme0n1", 00:31:10.888 "total_data_clusters": 238234, 00:31:10.888 "free_clusters": 233114, 00:31:10.888 "block_size": 512, 00:31:10.888 "cluster_size": 4194304 00:31:10.888 }, 00:31:10.888 { 00:31:10.888 "uuid": "0ee19679-a602-4764-b1db-c86a44db7b05", 00:31:10.888 "name": "lvs_n_0", 00:31:10.888 "base_bdev": "7509aac6-4417-4f70-a424-1e48691dd186", 00:31:10.888 "total_data_clusters": 5114, 00:31:10.888 "free_clusters": 5114, 00:31:10.888 "block_size": 512, 00:31:10.888 "cluster_size": 4194304 00:31:10.888 } 00:31:10.888 ]' 00:31:10.888 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0ee19679-a602-4764-b1db-c86a44db7b05") .free_clusters' 00:31:10.888 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:10.888 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0ee19679-a602-4764-b1db-c86a44db7b05") .cluster_size' 00:31:11.145 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:11.145 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:11.145 15:03:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:11.145 20456 00:31:11.145 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:11.145 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0ee19679-a602-4764-b1db-c86a44db7b05 lbd_nest_0 20456 00:31:11.402 15:03:50 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=14eae0cc-4eb1-4ff4-a0fb-c56f1fcd84c1 00:31:11.402 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.402 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:11.402 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 14eae0cc-4eb1-4ff4-a0fb-c56f1fcd84c1 00:31:11.659 15:03:50 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.916 15:03:51 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:11.916 15:03:51 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:11.916 15:03:51 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:11.916 15:03:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:11.916 15:03:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:12.172 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.355 Initializing NVMe Controllers 00:31:24.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.355 Initialization complete. Launching workers. 00:31:24.355 ======================================================== 00:31:24.355 Latency(us) 00:31:24.355 Device Information : IOPS MiB/s Average min max 00:31:24.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.10 0.02 20869.05 245.42 46110.35 00:31:24.355 ======================================================== 00:31:24.355 Total : 48.10 0.02 20869.05 245.42 46110.35 00:31:24.355 00:31:24.355 15:04:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:24.355 15:04:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.355 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.318 Initializing NVMe Controllers 00:31:34.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.318 Initialization complete. Launching workers. 
00:31:34.318 ======================================================== 00:31:34.318 Latency(us) 00:31:34.318 Device Information : IOPS MiB/s Average min max 00:31:34.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.20 9.65 12960.82 5049.22 48831.60 00:31:34.318 ======================================================== 00:31:34.318 Total : 77.20 9.65 12960.82 5049.22 48831.60 00:31:34.318 00:31:34.318 15:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:34.318 15:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:34.318 15:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.318 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.277 Initializing NVMe Controllers 00:31:44.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:44.277 Initialization complete. Launching workers. 00:31:44.277 ======================================================== 00:31:44.278 Latency(us) 00:31:44.278 Device Information : IOPS MiB/s Average min max 00:31:44.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4676.03 2.28 6842.64 633.82 16033.58 00:31:44.278 ======================================================== 00:31:44.278 Total : 4676.03 2.28 6842.64 633.82 16033.58 00:31:44.278 00:31:44.278 15:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:44.278 15:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:44.278 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.313 Initializing NVMe Controllers 00:31:54.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.313 Initialization complete. Launching workers. 00:31:54.313 ======================================================== 00:31:54.313 Latency(us) 00:31:54.313 Device Information : IOPS MiB/s Average min max 00:31:54.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3511.13 438.89 9118.35 1157.96 24122.15 00:31:54.313 ======================================================== 00:31:54.313 Total : 3511.13 438.89 9118.35 1157.96 24122.15 00:31:54.313 00:31:54.313 15:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:54.313 15:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:54.313 15:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.313 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.287 Initializing NVMe Controllers 00:32:04.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.287 Controller IO queue size 128, less than required. 00:32:04.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:04.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:04.287 Initialization complete. Launching workers. 00:32:04.287 ======================================================== 00:32:04.287 Latency(us) 00:32:04.287 Device Information : IOPS MiB/s Average min max 00:32:04.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8439.94 4.12 15184.91 1852.17 35539.94 00:32:04.287 ======================================================== 00:32:04.287 Total : 8439.94 4.12 15184.91 1852.17 35539.94 00:32:04.287 00:32:04.287 15:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:04.287 15:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.544 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.734 Initializing NVMe Controllers 00:32:16.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.734 Controller IO queue size 128, less than required. 00:32:16.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:16.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:16.734 Initialization complete. Launching workers. 00:32:16.734 ======================================================== 00:32:16.734 Latency(us) 00:32:16.734 Device Information : IOPS MiB/s Average min max 00:32:16.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1177.30 147.16 109666.29 24440.30 239196.87 00:32:16.734 ======================================================== 00:32:16.734 Total : 1177.30 147.16 109666.29 24440.30 239196.87 00:32:16.734 00:32:16.734 15:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.734 15:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 14eae0cc-4eb1-4ff4-a0fb-c56f1fcd84c1 00:32:16.734 15:04:55 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:16.734 15:04:55 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7509aac6-4417-4f70-a424-1e48691dd186 00:32:16.734 15:04:55 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:16.734 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:16.734 rmmod nvme_tcp 00:32:16.734 rmmod nvme_fabrics 00:32:16.992 rmmod nvme_keyring 00:32:16.992 15:04:56 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1998704 ']' 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1998704 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1998704 ']' 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1998704 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998704 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998704' 00:32:16.993 killing process with pid 1998704 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1998704 00:32:16.993 15:04:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1998704 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.515 15:04:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.416 15:05:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:21.416 00:32:21.416 real 1m35.401s 00:32:21.416 user 5m53.568s 00:32:21.416 sys 0m15.367s 00:32:21.416 15:05:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:21.416 15:05:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:21.416 ************************************ 00:32:21.416 END TEST nvmf_perf 00:32:21.416 ************************************ 00:32:21.416 15:05:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:21.416 15:05:00 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:21.675 15:05:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:21.675 15:05:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.675 15:05:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.675 ************************************ 00:32:21.675 START TEST nvmf_fio_host 00:32:21.675 ************************************ 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:21.675 * Looking for test 
storage... 00:32:21.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:21.675 15:05:00 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
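The two "Found net devices under ..." blocks above are nvmf/common.sh resolving each matching E810 PCI function (vendor 0x8086, device 0x159b) to its kernel interface through sysfs, with a link-state check before it is accepted. A minimal standalone sketch of that lookup, using the same IDs seen in this run (the real helper in common.sh does more filtering than this):

#!/usr/bin/env bash
# Map each Intel E810 function (8086:159b) to the net interfaces bound to it,
# the same association the trace prints as "Found net devices under <bdf>: <if>".
for dev in /sys/bus/pci/devices/*; do
  [[ $(cat "$dev/vendor") == 0x8086 && $(cat "$dev/device") == 0x159b ]] || continue
  bdf=${dev##*/}
  for ifdir in "$dev"/net/*; do
    [[ -e $ifdir ]] || continue               # skip functions with no bound netdev
    echo "Found net devices under $bdf: ${ifdir##*/}"
  done
done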
00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:32:23.576 00:32:23.576 --- 10.0.0.2 ping statistics --- 00:32:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.576 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:32:23.576 00:32:23.576 --- 10.0.0.1 ping statistics --- 00:32:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.576 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.576 15:05:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2011320 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2011320 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2011320 ']' 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.577 15:05:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.834 [2024-07-14 15:05:02.893214] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:23.834 [2024-07-14 15:05:02.893365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.834 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.834 [2024-07-14 15:05:03.027131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.092 [2024-07-14 15:05:03.281207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
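Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so a single node can exercise NVMe/TCP end to end over real hardware. A sketch of the same topology with the interface names and addresses from this run (paths shortened to the SPDK checkout):

# Reset any stale addresses, then split the two ports across namespaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic (port 4420) in, then sanity-check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target runs inside the namespace so its listeners bind on 10.0.0.2.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF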
00:32:24.092 [2024-07-14 15:05:03.281282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.092 [2024-07-14 15:05:03.281311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.092 [2024-07-14 15:05:03.281336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.092 [2024-07-14 15:05:03.281361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.092 [2024-07-14 15:05:03.281481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.092 [2024-07-14 15:05:03.281562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.092 [2024-07-14 15:05:03.281643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.092 [2024-07-14 15:05:03.281653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.656 15:05:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.656 15:05:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:32:24.656 15:05:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:24.913 [2024-07-14 15:05:04.050162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.913 15:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:24.913 15:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:24.913 15:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.913 15:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:25.171 Malloc1 00:32:25.171 15:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:25.428 15:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:25.685 15:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.942 [2024-07-14 15:05:05.129172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.942 15:05:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:26.204 15:05:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:26.507 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:26.507 fio-3.35 00:32:26.507 Starting 1 thread 00:32:26.507 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.036 00:32:29.036 test: (groupid=0, jobs=1): err= 0: pid=2011683: Sun Jul 14 15:05:08 2024 00:32:29.036 read: IOPS=6427, BW=25.1MiB/s (26.3MB/s)(50.4MiB/2009msec) 00:32:29.036 slat (usec): min=2, max=176, avg= 3.75, stdev= 2.47 00:32:29.036 clat (usec): min=3297, max=18516, avg=10795.88, stdev=916.96 00:32:29.036 lat (usec): min=3346, max=18519, avg=10799.63, stdev=916.78 00:32:29.036 clat percentiles (usec): 00:32:29.036 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:32:29.036 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:32:29.036 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:32:29.036 | 99.00th=[12780], 99.50th=[13042], 99.90th=[16909], 99.95th=[17957], 00:32:29.036 | 99.99th=[18482] 00:32:29.036 bw ( KiB/s): min=24496, max=26360, per=99.92%, avg=25690.00, stdev=820.18, samples=4 00:32:29.036 iops : min= 6124, max= 6590, avg=6422.50, stdev=205.05, samples=4 00:32:29.036 write: IOPS=6430, BW=25.1MiB/s (26.3MB/s)(50.5MiB/2009msec); 0 zone resets 00:32:29.036 slat (usec): min=3, max=153, avg= 3.94, stdev= 1.92 00:32:29.036 clat (usec): min=1778, max=16661, avg=8976.17, stdev=764.68 00:32:29.036 lat (usec): min=1797, max=16664, avg=8980.12, stdev=764.62 00:32:29.036 clat percentiles (usec): 00:32:29.036 | 1.00th=[ 7242], 5.00th=[ 7898], 
10.00th=[ 8160], 20.00th=[ 8455], 00:32:29.036 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:29.036 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:32:29.036 | 99.00th=[10552], 99.50th=[10945], 99.90th=[15139], 99.95th=[16188], 00:32:29.036 | 99.99th=[16581] 00:32:29.036 bw ( KiB/s): min=25544, max=26136, per=99.99%, avg=25720.00, stdev=282.31, samples=4 00:32:29.036 iops : min= 6386, max= 6534, avg=6430.00, stdev=70.58, samples=4 00:32:29.036 lat (msec) : 2=0.01%, 4=0.08%, 10=55.47%, 20=44.44% 00:32:29.036 cpu : usr=66.04%, sys=32.32%, ctx=76, majf=0, minf=1536 00:32:29.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:29.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:29.037 issued rwts: total=12913,12919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:29.037 00:32:29.037 Run status group 0 (all jobs): 00:32:29.037 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.9MB), run=2009-2009msec 00:32:29.037 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.5MiB (52.9MB), run=2009-2009msec 00:32:29.037 ----------------------------------------------------- 00:32:29.037 Suppressions used: 00:32:29.037 count bytes template 00:32:29.037 1 57 /usr/src/fio/parse.c 00:32:29.037 1 8 libtcmalloc_minimal.so 00:32:29.037 ----------------------------------------------------- 00:32:29.037 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:29.037 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:29.295 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:29.295 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:32:29.295 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:29.295 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:29.295 15:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:29.295 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:29.295 fio-3.35 00:32:29.295 Starting 1 thread 00:32:29.553 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.081 00:32:32.081 test: (groupid=0, jobs=1): err= 0: pid=2012134: Sun Jul 14 15:05:11 2024 00:32:32.081 read: IOPS=6349, BW=99.2MiB/s (104MB/s)(199MiB/2007msec) 00:32:32.081 slat (usec): min=3, max=141, avg= 5.02, stdev= 2.17 00:32:32.081 clat (usec): min=2831, max=22396, avg=11589.45, stdev=2397.73 00:32:32.081 lat (usec): min=2836, max=22401, avg=11594.47, stdev=2397.83 00:32:32.081 clat percentiles (usec): 00:32:32.081 | 1.00th=[ 6325], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9765], 00:32:32.081 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:32:32.081 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14877], 95.00th=[15926], 00:32:32.081 | 99.00th=[17957], 99.50th=[19006], 99.90th=[20841], 99.95th=[21103], 00:32:32.081 | 99.99th=[21627] 00:32:32.081 bw ( KiB/s): min=41504, max=57504, per=49.54%, avg=50328.00, stdev=8298.74, samples=4 00:32:32.081 iops : min= 2594, max= 3594, avg=3145.50, stdev=518.67, samples=4 00:32:32.081 write: IOPS=3762, BW=58.8MiB/s (61.6MB/s)(104MiB/1763msec); 0 zone resets 00:32:32.081 slat (usec): min=33, max=147, avg=36.58, stdev= 5.68 00:32:32.081 clat (usec): min=6231, max=27158, avg=15558.30, stdev=2561.04 00:32:32.081 lat (usec): min=6265, max=27208, avg=15594.88, stdev=2560.96 00:32:32.081 clat percentiles (usec): 00:32:32.081 | 1.00th=[10159], 5.00th=[11469], 10.00th=[12387], 20.00th=[13304], 00:32:32.081 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15401], 60.00th=[16057], 00:32:32.081 | 70.00th=[16712], 80.00th=[17695], 90.00th=[19006], 95.00th=[19792], 00:32:32.081 | 99.00th=[21627], 99.50th=[22414], 99.90th=[26608], 99.95th=[26870], 00:32:32.081 | 99.99th=[27132] 00:32:32.081 bw ( KiB/s): min=43040, max=61152, per=87.25%, avg=52520.00, stdev=9105.35, samples=4 00:32:32.081 iops : min= 2690, max= 3822, avg=3282.50, stdev=569.08, samples=4 00:32:32.081 lat (msec) : 4=0.11%, 10=15.85%, 20=82.30%, 50=1.74% 00:32:32.081 cpu : usr=78.96%, sys=19.64%, ctx=41, majf=0, minf=2091 00:32:32.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:32.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:32.081 issued rwts: total=12744,6633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:32.081 00:32:32.081 Run status group 0 (all jobs): 00:32:32.081 READ: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=199MiB (209MB), run=2007-2007msec 00:32:32.081 WRITE: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=104MiB (109MB), run=1763-1763msec 00:32:32.081 
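Both fio runs above drive the target through SPDK's external fio plugin rather than the kernel NVMe/TCP initiator: the job files (example_config.fio, mock_sgl_config.fio) select ioengine=spdk, and the subsystem to connect to is encoded in --filename as a transport ID instead of a device node. Stripped of the sanitizer bookkeeping in fio_plugin(), the invocation pattern is roughly the following (paths shortened; libasan is preloaded only because this build is ASan-instrumented):

# Preload ASan first, then the SPDK plugin that provides the "spdk" ioengine.
LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_nvme' \
/usr/src/fio/fio ./app/fio/nvme/example_config.fio \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096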
----------------------------------------------------- 00:32:32.081 Suppressions used: 00:32:32.081 count bytes template 00:32:32.081 1 57 /usr/src/fio/parse.c 00:32:32.081 228 21888 /usr/src/fio/iolog.c 00:32:32.081 1 8 libtcmalloc_minimal.so 00:32:32.081 ----------------------------------------------------- 00:32:32.081 00:32:32.081 15:05:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:32:32.339 15:05:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:35.615 Nvme0n1 00:32:35.615 15:05:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:38.889 15:05:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=3c25a1b4-b28a-4da8-8b94-626959d5cc11 00:32:38.889 15:05:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 3c25a1b4-b28a-4da8-8b94-626959d5cc11 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3c25a1b4-b28a-4da8-8b94-626959d5cc11 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:38.890 { 00:32:38.890 "uuid": "3c25a1b4-b28a-4da8-8b94-626959d5cc11", 00:32:38.890 "name": "lvs_0", 00:32:38.890 "base_bdev": "Nvme0n1", 00:32:38.890 "total_data_clusters": 930, 00:32:38.890 "free_clusters": 930, 00:32:38.890 "block_size": 512, 00:32:38.890 "cluster_size": 1073741824 00:32:38.890 } 00:32:38.890 ]' 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3c25a1b4-b28a-4da8-8b94-626959d5cc11") .free_clusters' 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3c25a1b4-b28a-4da8-8b94-626959d5cc11") .cluster_size' 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:38.890 952320 00:32:38.890 15:05:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:39.148 90a6bf5c-05c8-4303-b409-3d351d3f8895 00:32:39.148 15:05:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:39.406 15:05:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:39.664 15:05:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:39.922 15:05:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:39.922 15:05:19 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:40.180 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:40.180 fio-3.35 00:32:40.180 Starting 1 thread 00:32:40.180 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.708 00:32:42.708 test: (groupid=0, jobs=1): err= 0: pid=2013525: Sun Jul 14 15:05:21 2024 00:32:42.708 read: IOPS=4346, BW=17.0MiB/s (17.8MB/s)(34.8MiB/2052msec) 00:32:42.708 slat (usec): min=2, max=151, avg= 3.59, stdev= 2.44 00:32:42.708 clat (usec): min=1485, max=172753, avg=16006.13, stdev=13704.65 00:32:42.708 lat (usec): min=1488, max=172804, avg=16009.72, stdev=13705.00 00:32:42.708 clat percentiles (msec): 00:32:42.708 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:42.708 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:42.709 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:42.709 | 99.00th=[ 61], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:42.709 | 99.99th=[ 174] 00:32:42.709 bw ( KiB/s): min=12784, max=19656, per=100.00%, avg=17698.00, stdev=3284.33, samples=4 00:32:42.709 iops : min= 3196, max= 4914, avg=4424.50, stdev=821.08, samples=4 00:32:42.709 write: IOPS=4349, BW=17.0MiB/s (17.8MB/s)(34.9MiB/2052msec); 0 zone resets 00:32:42.709 slat (usec): min=3, max=123, avg= 3.77, stdev= 1.88 00:32:42.709 clat (usec): min=424, max=170358, avg=13280.65, stdev=12886.63 00:32:42.709 lat (usec): min=427, max=170364, avg=13284.41, stdev=12886.98 00:32:42.709 clat percentiles (msec): 00:32:42.709 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:42.709 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:32:42.709 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:42.709 | 99.00th=[ 57], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:42.709 | 99.99th=[ 171] 00:32:42.709 bw ( KiB/s): min=13352, max=19392, per=100.00%, avg=17738.00, stdev=2928.03, samples=4 00:32:42.709 iops : min= 3338, max= 4848, avg=4434.50, stdev=732.01, samples=4 00:32:42.709 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:42.709 lat (msec) : 2=0.03%, 4=0.08%, 10=1.89%, 20=96.55%, 100=0.71% 00:32:42.709 lat (msec) : 250=0.72% 00:32:42.709 cpu : usr=63.24%, sys=35.35%, ctx=101, majf=0, minf=1534 00:32:42.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:42.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.709 issued rwts: total=8920,8925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.709 00:32:42.709 Run status group 0 (all jobs): 00:32:42.709 READ: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.8MiB (36.5MB), run=2052-2052msec 00:32:42.709 WRITE: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.9MiB (36.6MB), run=2052-2052msec 00:32:42.967 ----------------------------------------------------- 00:32:42.967 Suppressions used: 00:32:42.967 count bytes template 00:32:42.967 1 58 /usr/src/fio/parse.c 00:32:42.967 1 8 libtcmalloc_minimal.so 00:32:42.967 ----------------------------------------------------- 00:32:42.967 00:32:42.967 15:05:22 nvmf_tcp.nvmf_fio_host 
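The run that just completed targets a namespace backed by a logical volume carved from the node's own NVMe drive instead of a Malloc bdev. The RPC sequence, condensed from the host/fio.sh@52 to @58 trace above (the TCP transport itself was created earlier at host/fio.sh@29; 0000:88:00.0 and the 952320 MiB size are taken from this run):

rpc=./scripts/rpc.py
# Attach the local PCIe NVMe controller; it shows up as bdev "Nvme0n1".
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2
# Put an lvol store with 1 GiB clusters on it and carve out a single volume.
$rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
$rpc bdev_lvol_create -l lvs_0 lbd_0 952320        # size in MiB, derived below
# Export the volume over NVMe/TCP as a second subsystem.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420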
-- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:43.225 15:05:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e1888c78-9d75-4601-a9d1-e72831fe82e3 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e1888c78-9d75-4601-a9d1-e72831fe82e3 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e1888c78-9d75-4601-a9d1-e72831fe82e3 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:44.592 { 00:32:44.592 "uuid": "3c25a1b4-b28a-4da8-8b94-626959d5cc11", 00:32:44.592 "name": "lvs_0", 00:32:44.592 "base_bdev": "Nvme0n1", 00:32:44.592 "total_data_clusters": 930, 00:32:44.592 "free_clusters": 0, 00:32:44.592 "block_size": 512, 00:32:44.592 "cluster_size": 1073741824 00:32:44.592 }, 00:32:44.592 { 00:32:44.592 "uuid": "e1888c78-9d75-4601-a9d1-e72831fe82e3", 00:32:44.592 "name": "lvs_n_0", 00:32:44.592 "base_bdev": "90a6bf5c-05c8-4303-b409-3d351d3f8895", 00:32:44.592 "total_data_clusters": 237847, 00:32:44.592 "free_clusters": 237847, 00:32:44.592 "block_size": 512, 00:32:44.592 "cluster_size": 4194304 00:32:44.592 } 00:32:44.592 ]' 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e1888c78-9d75-4601-a9d1-e72831fe82e3") .free_clusters' 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e1888c78-9d75-4601-a9d1-e72831fe82e3") .cluster_size' 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:44.592 951388 00:32:44.592 15:05:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:45.964 9627a9cf-63fb-4474-88c7-fd328bef975a 00:32:45.964 15:05:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:45.964 15:05:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:46.221 15:05:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host 
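Both lvol sizes used in this test fall straight out of get_lvs_free_mb, which multiplies a store's free_clusters by its cluster_size and converts the result to MiB. Checked against the two stores reported above:

# lvs_0:   930 free clusters of 1 GiB (1073741824 B) each
echo $(( 930    * 1073741824 / 1024 / 1024 ))    # -> 952320 (size passed to lbd_0)
# lvs_n_0: 237847 free clusters of 4 MiB (4194304 B) each
echo $(( 237847 * 4194304    / 1024 / 1024 ))    # -> 951388 (size passed to lbd_nest_0)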
-- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:46.479 15:05:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.736 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:46.736 fio-3.35 00:32:46.736 Starting 1 thread 00:32:46.736 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.263 00:32:49.263 test: (groupid=0, jobs=1): err= 0: pid=2014263: Sun Jul 14 15:05:28 2024 00:32:49.263 read: IOPS=4378, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2011msec) 00:32:49.263 slat (usec): min=2, max=141, avg= 3.55, stdev= 2.19 00:32:49.263 clat (usec): min=5983, max=27141, avg=15897.22, stdev=1502.38 00:32:49.263 lat (usec): min=5989, max=27145, avg=15900.77, stdev=1502.23 00:32:49.263 clat percentiles (usec): 00:32:49.263 | 1.00th=[12387], 5.00th=[13698], 10.00th=[14222], 20.00th=[14746], 00:32:49.263 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[16319], 00:32:49.263 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:49.263 | 99.00th=[19268], 99.50th=[19792], 99.90th=[25822], 99.95th=[26084], 00:32:49.263 | 99.99th=[27132] 00:32:49.263 bw ( KiB/s): min=16384, max=18056, per=99.76%, 
avg=17474.00, stdev=747.77, samples=4 00:32:49.263 iops : min= 4096, max= 4514, avg=4368.50, stdev=186.94, samples=4 00:32:49.263 write: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2011msec); 0 zone resets 00:32:49.263 slat (usec): min=3, max=121, avg= 3.74, stdev= 1.74 00:32:49.263 clat (usec): min=2864, max=24120, avg=13055.99, stdev=1241.77 00:32:49.263 lat (usec): min=2875, max=24123, avg=13059.73, stdev=1241.69 00:32:49.263 clat percentiles (usec): 00:32:49.263 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11731], 20.00th=[12125], 00:32:49.263 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:32:49.263 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:32:49.263 | 99.00th=[15795], 99.50th=[16319], 99.90th=[20841], 99.95th=[22676], 00:32:49.263 | 99.99th=[24249] 00:32:49.263 bw ( KiB/s): min=17304, max=17664, per=99.89%, avg=17492.00, stdev=149.02, samples=4 00:32:49.263 iops : min= 4326, max= 4416, avg=4373.00, stdev=37.26, samples=4 00:32:49.263 lat (msec) : 4=0.02%, 10=0.50%, 20=99.21%, 50=0.27% 00:32:49.263 cpu : usr=63.68%, sys=34.83%, ctx=97, majf=0, minf=1534 00:32:49.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:49.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:49.263 issued rwts: total=8806,8804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:49.263 00:32:49.263 Run status group 0 (all jobs): 00:32:49.263 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.1MB), run=2011-2011msec 00:32:49.263 WRITE: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.1MB), run=2011-2011msec 00:32:49.263 ----------------------------------------------------- 00:32:49.263 Suppressions used: 00:32:49.263 count bytes template 00:32:49.263 1 58 /usr/src/fio/parse.c 00:32:49.263 1 8 libtcmalloc_minimal.so 00:32:49.263 ----------------------------------------------------- 00:32:49.263 00:32:49.263 15:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:49.520 15:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:49.520 15:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:53.764 15:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:54.021 15:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:57.295 15:05:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:57.295 15:05:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- 
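Teardown mirrors the setup in reverse. The helpers traced around this point (nvmftestfini, nvmfcleanup, killprocess, remove_spdk_ns) are part of the test framework; the sketch below approximates them with plain commands, reusing the names and pid from this run, so the kill/wait and namespace-removal lines are an approximation rather than a verbatim copy of the helpers:

rpc=./scripts/rpc.py
# Unwind the nested lvol stack, then the outer one, then release the NVMe bdev.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
sync
$rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0
$rpc bdev_lvol_delete lvs_0/lbd_0
$rpc bdev_lvol_delete_lvstore -l lvs_0
$rpc bdev_nvme_detach_controller Nvme0

# Initiator-side module cleanup, then stop the target and flush the test address.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 2011320 && wait 2011320        # nvmfpid from this run; wait only works from the launching shell
ip netns delete cvl_0_0_ns_spdk     # what remove_spdk_ns amounts to here (approximation)
ip -4 addr flush cvl_0_1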
nvmf/common.sh@488 -- # nvmfcleanup 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:59.193 rmmod nvme_tcp 00:32:59.193 rmmod nvme_fabrics 00:32:59.193 rmmod nvme_keyring 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2011320 ']' 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2011320 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2011320 ']' 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2011320 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011320 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011320' 00:32:59.193 killing process with pid 2011320 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2011320 00:32:59.193 15:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2011320 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:01.094 15:05:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.001 15:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:03.001 00:33:03.001 real 0m41.193s 00:33:03.001 user 2m36.362s 00:33:03.001 sys 0m8.112s 00:33:03.001 15:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.001 15:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.001 ************************************ 00:33:03.001 END TEST nvmf_fio_host 00:33:03.001 ************************************ 00:33:03.001 15:05:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:03.001 15:05:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test 
nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:03.001 15:05:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.001 15:05:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.001 15:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.001 ************************************ 00:33:03.001 START TEST nvmf_failover 00:33:03.001 ************************************ 00:33:03.001 15:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:03.001 * Looking for test storage... 00:33:03.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.001 15:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:04.903 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:04.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.903 15:05:44 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:04.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:04.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:04.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:04.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:33:04.903 00:33:04.903 --- 10.0.0.2 ping statistics --- 00:33:04.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.903 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:04.903 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:04.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:33:04.904 00:33:04.904 --- 10.0.0.1 ping statistics --- 00:33:04.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.904 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2017763 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2017763 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2017763 ']' 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
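The network bring-up traced above reduces to a small back-to-back loopback topology on the two e810 ports: cvl_0_0 (the target side) is moved into its own network namespace, cvl_0_1 (the initiator side) stays in the root namespace, and the two ends get 10.0.0.2 and 10.0.0.1. A condensed sketch of those steps, using only commands that appear in the trace (interface and namespace names are specific to this run and will differ on other hosts):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

With both pings answering, nvmf_tgt (pid 2017763 in this run) is launched inside the cvl_0_0_ns_spdk namespace and the harness waits for its RPC socket at /var/tmp/spdk.sock, which is where the log picks up below.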
00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.904 15:05:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:05.161 [2024-07-14 15:05:44.294418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:05.161 [2024-07-14 15:05:44.294559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.161 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.161 [2024-07-14 15:05:44.428282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:05.419 [2024-07-14 15:05:44.654911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.419 [2024-07-14 15:05:44.654978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.419 [2024-07-14 15:05:44.655023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.419 [2024-07-14 15:05:44.655041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.419 [2024-07-14 15:05:44.655059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.419 [2024-07-14 15:05:44.655201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.419 [2024-07-14 15:05:44.655307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.419 [2024-07-14 15:05:44.655316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.984 15:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:06.242 [2024-07-14 15:05:45.474527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.242 15:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:06.807 Malloc0 00:33:06.807 15:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:07.065 15:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:07.323 15:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.580 [2024-07-14 15:05:46.659714] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.580 15:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:07.838 [2024-07-14 15:05:46.956570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:07.838 15:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:08.095 [2024-07-14 15:05:47.253651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2018176 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2018176 /var/tmp/bdevperf.sock 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2018176 ']' 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:08.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
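The target-side configuration traced above is a short sequence of rpc.py calls against /var/tmp/spdk.sock, followed by starting bdevperf in RPC-wait mode on its own socket. A condensed recap, taken directly from the commands in the trace (rpc.py and bdevperf stand in for the full script paths shown above; the loop is editorial shorthand for the three separate add_listener calls):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f   # wait for RPC; 128-deep 4 KiB verify I/O for 15 s

Once bdevperf (pid 2018176 in this run) is listening on /var/tmp/bdevperf.sock, the controller attach and I/O commands below go through that socket, while the listener changes keep going through the target's spdk.sock.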
00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.095 15:05:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 15:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.028 15:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:09.028 15:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:09.593 NVMe0n1 00:33:09.593 15:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:10.157 00:33:10.157 15:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2018441 00:33:10.157 15:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:10.157 15:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:11.091 15:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.349 [2024-07-14 15:05:50.565750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.565990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) 
to be set 00:33:11.349 [2024-07-14 15:05:50.566076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be 
set 00:33:11.349 [2024-07-14 15:05:50.566452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.349 [2024-07-14 15:05:50.566778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 
00:33:11.349 [2024-07-14 15:05:50.566794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.566986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.567002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.567019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.567040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.567058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 [2024-07-14 15:05:50.567074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:11.350 15:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:14.679 15:05:53 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:14.679 00:33:14.679 15:05:53 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.243 15:05:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:18.521 15:05:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.521 [2024-07-14 15:05:57.485261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.521 15:05:57 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:19.454 15:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:19.712 15:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2018441 00:33:26.275 0 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2018176 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2018176 ']' 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2018176 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2018176 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2018176' 00:33:26.275 killing process with pid 2018176 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2018176 00:33:26.275 15:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2018176 00:33:26.275 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:26.275 [2024-07-14 15:05:47.353404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:26.275 [2024-07-14 15:05:47.353572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018176 ] 00:33:26.275 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.275 [2024-07-14 15:05:47.481013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.275 [2024-07-14 15:05:47.715621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.275 Running I/O for 15 seconds... 
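Everything from host/failover.sh@35 through @59 above is the failover exercise itself: bdevperf is given two paths to the same subsystem, I/O is started, and listeners are then removed and restored one by one while the run is in flight. Condensed into the commands visible in the trace (rpc.py / bdevperf.py shorthand for the full paths; the trailing wait is editorial shorthand for the script waiting on the perform_tests pid):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &            # 15 s verify run starts here
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # first failover
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second failover
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original path
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to it
  wait                                                              # perform_tests finishes, bdevperf is then killed

The burst of tcp.c:1607 "recv state ... is same with the state(5) to be set" notices earlier, and the ABORTED - SQ DELETION completions in the try.txt dump that follows, line up with those listener removals: the target tears down the queue pairs on the dropped port while bdevperf still has I/O in flight, and bdev_nvme's multipath handling reissues that I/O on the surviving path, consistent with the run completing and bdevperf shutting down cleanly afterwards.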
00:33:26.275 [2024-07-14 15:05:50.569689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.569762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.569824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.569850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.569925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.569948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.569970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.569994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.570015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.570037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.570058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.570081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.570103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.275 [2024-07-14 15:05:50.570125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.275 [2024-07-14 15:05:50.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570722] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.570987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59904 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 
[2024-07-14 15:05:50.571639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.276 [2024-07-14 15:05:50.571660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.276 [2024-07-14 15:05:50.571681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.571961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.277 [2024-07-14 15:05:50.572533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.277 [2024-07-14 15:05:50.572554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.277 [2024-07-14 15:05:50.572577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.277 [2024-07-14 15:05:50.572602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining in-flight I/O on qid:1 (WRITE lba:60160 through lba:60416, plus READ lba:59720); each completion reports ABORTED - SQ DELETION (00/08) ...]
00:33:26.278 [2024-07-14 15:05:50.574194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:26.278 [2024-07-14 15:05:50.574221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60424 len:8 PRP1 0x0 PRP2 0x0
00:33:26.278 [2024-07-14 15:05:50.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.278 [2024-07-14 15:05:50.574269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the aborting queued i/o / Command completed manually sequence repeats for the queued WRITEs lba:60432 through lba:60672, all completed with ABORTED - SQ DELETION (00/08) ...]
00:33:26.280 [2024-07-14 15:05:50.576816] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller.
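Each aborted I/O above produces one command/completion pair, so runs like this dominate the log. When triaging, it can help to count and bound them instead of reading them linearly; a minimal sketch, assuming the console output has been saved to a file named nvmf_failover.log (the filename is an assumption, not taken from this run):
    # Count completions aborted because their submission queue was deleted
    grep -c 'ABORTED - SQ DELETION' nvmf_failover.log
    # Print the lowest and highest LBA mentioned by the aborted commands
    grep -o 'lba:[0-9]*' nvmf_failover.log | sort -t: -k2 -n | sed -n '1p;$p'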
00:33:26.280 [2024-07-14 15:05:50.576848] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:26.280 [2024-07-14 15:05:50.576908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:26.280 [2024-07-14 15:05:50.576935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.280 [2024-07-14 15:05:50.576964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:26.280 [2024-07-14 15:05:50.576984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.280 [2024-07-14 15:05:50.577005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:26.280 [2024-07-14 15:05:50.577024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.280 [2024-07-14 15:05:50.577044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:26.280 [2024-07-14 15:05:50.577063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.280 [2024-07-14 15:05:50.577082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:26.280 [2024-07-14 15:05:50.577176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:26.280 [2024-07-14 15:05:50.580973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:26.280 [2024-07-14 15:05:50.703887] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
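The failover recorded here (10.0.0.2:4420 to 10.0.0.2:4421 for subsystem nqn.2016-06.io.spdk:cnode1) is the bdev_nvme module switching the initiator to an alternate TCP path after the first listener goes away. A setup of that shape is typically provisioned with SPDK's rpc.py along the following lines; this is an illustrative sketch, not the exact commands of this run, and the bdev name NVMe0 and the Malloc0 backing device are assumptions:
    # Target side: TCP transport, one subsystem with a namespace and two listeners
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Host side: attach the first path, then register the second path under the same controller name
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Removing the first listener then drives the Start failover / resetting controller sequence seen above
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420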
00:33:26.280 [2024-07-14 15:05:54.227407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:26.280 [2024-07-14 15:05:54.227508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort is reported for admin commands cid:2, cid:1 and cid:0 ...]
00:33:26.280 [2024-07-14 15:05:54.227665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:33:26.280 [2024-07-14 15:05:54.227773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.280 [2024-07-14 15:05:54.227803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair repeats for the remaining in-flight I/O on qid:1 (WRITE lba:9624 through lba:9664 and READ lba:8664 through lba:9288); each completion reports ABORTED - SQ DELETION (00/08) ...]
00:33:26.283 [2024-07-14 15:05:54.231818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.283 [2024-07-14 15:05:54.231839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.231862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.231891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.231927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.231950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.231973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.231994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.283 [2024-07-14 15:05:54.232630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.283 [2024-07-14 15:05:54.232652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 
15:05:54.232814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.232933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.232956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.284 [2024-07-14 15:05:54.232978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.284 [2024-07-14 15:05:54.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:54.233703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.233723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x6150001f3180 is same with the state(5) to be set 00:33:26.284 [2024-07-14 15:05:54.233757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.284 [2024-07-14 15:05:54.233776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.284 [2024-07-14 15:05:54.233795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9608 len:8 PRP1 0x0 PRP2 0x0 00:33:26.284 [2024-07-14 15:05:54.233814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:54.234102] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 00:33:26.284 [2024-07-14 15:05:54.234138] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:26.284 [2024-07-14 15:05:54.234161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.284 [2024-07-14 15:05:54.238245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.284 [2024-07-14 15:05:54.238347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:26.284 [2024-07-14 15:05:54.405610] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:26.284 [2024-07-14 15:05:58.780398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:58.780492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:58.780536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:58.780561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:58.780586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:58.780609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:58.780633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:58.780655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:58.780679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.284 [2024-07-14 15:05:58.780701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.284 [2024-07-14 15:05:58.780724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.780815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.780927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.780973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.780995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.285 [2024-07-14 15:05:58.781950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.781974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.781996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 
[2024-07-14 15:05:58.782203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.285 [2024-07-14 15:05:58.782292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.285 [2024-07-14 15:05:58.782314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.782973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.782998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.286 [2024-07-14 15:05:58.783499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.286 [2024-07-14 15:05:58.783520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12288 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.783968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.783990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.784035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 
15:05:58.784079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.784123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.287 [2024-07-14 15:05:58.784177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12392 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12400 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12408 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12424 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12432 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12440 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12456 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.784938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.784955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.784973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12464 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.784992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.287 [2024-07-14 15:05:58.785011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.287 [2024-07-14 15:05:58.785027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.287 [2024-07-14 15:05:58.785044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12472 len:8 PRP1 0x0 PRP2 0x0 00:33:26.287 [2024-07-14 15:05:58.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.288 [2024-07-14 15:05:58.785082 - 15:05:58.787966] (condensed) the same four-message pattern repeats for every request still queued on qid:1: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:, 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0 for each lba from 12480 through 12768 in steps of 8, and 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; one final queued READ (sqid:1 cid:0 nsid:1 lba:12000 len:8) is completed the same way.
00:33:26.289 [2024-07-14 15:05:58.788244] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller.
00:33:26.289 [2024-07-14 15:05:58.788275] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:33:26.289 [2024-07-14 15:05:58.788327 - 15:05:58.788484] (condensed) four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) are each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:26.290 [2024-07-14 15:05:58.788504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:26.290 [2024-07-14 15:05:58.788582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:26.290 [2024-07-14 15:05:58.792457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:26.290 [2024-07-14 15:05:58.962722] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
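The burst above is bdev_nvme reacting to the loss of the path that was carrying I/O during the 15-second verify run: every request still queued on the dead qpair is manually completed as ABORTED - SQ DELETION, the multipath logic fails the controller over from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller is reset on the surviving path. As a rough, hand-run equivalent (not the test script itself), the same kind of sequence can be provoked against a bdevperf instance that has NVMe0 attached over several TCP paths by tearing the active path down; the RPC script, socket, addresses and NQN below are the ones used elsewhere in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # drop the path currently serving I/O; bdev_nvme aborts the requests queued
    # on that qpair and fails over to one of the remaining transport IDs
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1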
00:33:26.290
00:33:26.290                                                            Latency(us)
00:33:26.290 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:33:26.290 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:26.290      Verification LBA range: start 0x0 length 0x4000
00:33:26.290      NVMe0n1                :      15.01    6131.86      23.95     983.15       0.00    17952.77     813.13   23107.51
00:33:26.290 ===================================================================================================================
00:33:26.290 Total                       :              6131.86      23.95     983.15       0.00    17952.77     813.13   23107.51
00:33:26.290 Received shutdown signal, test time was about 15.000000 seconds
00:33:26.290
00:33:26.290                                                            Latency(us)
00:33:26.290 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:33:26.290 ===================================================================================================================
00:33:26.290 Total                       :                 0.00       0.00       0.00       0.00        0.00       0.00       0.00
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2020396
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2020396 /var/tmp/bdevperf.sock
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2020396 ']'
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:26.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:26.290 15:06:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.221 15:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:27.221 15:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:27.221 15:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:27.487 [2024-07-14 15:06:06.710482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:27.487 15:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:27.745 [2024-07-14 15:06:06.979289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:27.745 15:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:28.310 NVMe0n1 00:33:28.310 15:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:28.567 00:33:28.824 15:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:29.081 00:33:29.081 15:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:29.081 15:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:29.338 15:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:29.595 15:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:32.871 15:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:32.871 15:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:32.871 15:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2021697 00:33:32.871 15:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:32.871 15:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2021697 00:33:34.242 0 00:33:34.242 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:34.242 [2024-07-14 15:06:05.562352] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:34.242 [2024-07-14 15:06:05.562509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020396 ] 00:33:34.242 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.242 [2024-07-14 15:06:05.690271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.242 [2024-07-14 15:06:05.924522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.242 [2024-07-14 15:06:08.845088] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:34.242 [2024-07-14 15:06:08.845225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.242 [2024-07-14 15:06:08.845258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.242 [2024-07-14 15:06:08.845285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.242 [2024-07-14 15:06:08.845306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.242 [2024-07-14 15:06:08.845326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.242 [2024-07-14 15:06:08.845347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.242 [2024-07-14 15:06:08.845368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.242 [2024-07-14 15:06:08.845388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.242 [2024-07-14 15:06:08.845407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.242 [2024-07-14 15:06:08.845496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.242 [2024-07-14 15:06:08.845548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.242 [2024-07-14 15:06:08.854931] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:34.242 Running I/O for 1 seconds... 
00:33:34.242
00:33:34.242                                                            Latency(us)
00:33:34.242 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:33:34.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:34.242      Verification LBA range: start 0x0 length 0x4000
00:33:34.242      NVMe0n1                :       1.02    6395.76      24.98       0.00       0.00    19922.20    4223.43   17767.54
00:33:34.242 ===================================================================================================================
00:33:34.242 Total                       :              6395.76      24.98       0.00       0.00    19922.20    4223.43   17767.54
00:33:34.242 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:34.242 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:34.242 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:34.499 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:34.499 15:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:34.757 15:06:14 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:35.015 15:06:14 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2020396
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2020396 ']'
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2020396
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:38.294 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2020396
00:33:38.295 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:38.295 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:38.295 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2020396'
00:33:38.295 killing process with pid 2020396
00:33:38.295 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2020396
00:33:38.295 15:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2020396
00:33:39.229 15:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:33:39.229 15:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:33:39.486
15:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.486 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:39.486 rmmod nvme_tcp 00:33:39.486 rmmod nvme_fabrics 00:33:39.749 rmmod nvme_keyring 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2017763 ']' 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2017763 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2017763 ']' 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2017763 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2017763 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2017763' 00:33:39.749 killing process with pid 2017763 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2017763 00:33:39.749 15:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2017763 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.181 15:06:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.087 15:06:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:43.087 00:33:43.087 real 0m40.326s 00:33:43.087 user 2m21.906s 00:33:43.087 sys 0m5.941s 00:33:43.087 15:06:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:43.087 15:06:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
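Condensed, the failover exercise traced above boils down to: the target publishes two extra listeners (4421, 4422) for nqn.2016-06.io.spdk:cnode1, bdevperf attaches the same controller NVMe0 over all three paths, and the test then removes one path at a time, checking after each removal that the controller is still present, before tearing everything down. The regrouping below is editorial: the individual rpc.py calls are taken from the trace, but the loop structure is only illustrative (in the actual run a short bdevperf workload is started via bdevperf.py perform_tests between the first and second removal):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side: two additional listeners for the same subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side (bdevperf RPC socket): attach NVMe0 over every path
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # remove paths one by one; the controller must survive each removal
    for port in 4420 4422 4421; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        sleep 3
    done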
00:33:43.087 ************************************ 00:33:43.087 END TEST nvmf_failover 00:33:43.087 ************************************ 00:33:43.087 15:06:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:43.087 15:06:22 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:43.087 15:06:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:43.087 15:06:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.087 15:06:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.087 ************************************ 00:33:43.087 START TEST nvmf_host_discovery 00:33:43.087 ************************************ 00:33:43.087 15:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:43.346 * Looking for test storage... 00:33:43.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:43.346 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.346 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:43.346 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:43.347 15:06:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:43.347 15:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.248 15:06:24 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:45.248 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:45.248 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:45.248 15:06:24 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:45.248 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.248 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:45.249 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.249 15:06:24 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.249 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:45.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:33:45.506 00:33:45.506 --- 10.0.0.2 ping statistics --- 00:33:45.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.506 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:45.506 00:33:45.506 --- 10.0.0.1 ping statistics --- 00:33:45.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.506 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:45.506 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2024555 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2024555 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2024555 ']' 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.507 15:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.507 [2024-07-14 15:06:24.775718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:45.507 [2024-07-14 15:06:24.775889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.764 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.764 [2024-07-14 15:06:24.909713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.022 [2024-07-14 15:06:25.132903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.022 [2024-07-14 15:06:25.132978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.022 [2024-07-14 15:06:25.133001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.022 [2024-07-14 15:06:25.133023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.022 [2024-07-14 15:06:25.133042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
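The nvmftestinit sequence traced above is the NET_TYPE=phy variant: the target-side e810 port (cvl_0_0, 10.0.0.2) is moved into its own network namespace cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so that traffic between the two addresses goes over the physical ports rather than a loopback device, and the target application is then launched inside that namespace. Condensed to its essential commands (interface, namespace and address values are the ones detected in this run; the grouping is editorial, not the script verbatim):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # firewall rule added by the common setup for TCP/4420
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2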
00:33:46.022 [2024-07-14 15:06:25.133083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 [2024-07-14 15:06:25.772212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 [2024-07-14 15:06:25.780421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 null0 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 null1 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2024709 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2024709 /tmp/host.sock 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2024709 ']' 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:46.587 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.587 15:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.587 [2024-07-14 15:06:25.892805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:46.587 [2024-07-14 15:06:25.892964] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024709 ] 00:33:46.846 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.846 [2024-07-14 15:06:26.025719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.104 [2024-07-14 15:06:26.278709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.671 15:06:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.671 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.929 15:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.929 15:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.929 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:47.929 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:47.929 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.929 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.929 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 [2024-07-14 15:06:27.144288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 
15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:47.930 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:48.188 15:06:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:48.754 [2024-07-14 15:06:27.923770] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:48.754 [2024-07-14 15:06:27.923816] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:48.754 [2024-07-14 15:06:27.923862] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:48.754 [2024-07-14 15:06:28.011165] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:49.012 [2024-07-14 15:06:28.075033] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:49.012 [2024-07-14 15:06:28.075066] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:49.012 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.012 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.012 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.270 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
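Note: by this point the trace has built up the target side over its default RPC socket (TCP transport, discovery listener on 8009, two null bdevs, subsystem nqn.2016-06.io.spdk:cnode0 with null0 attached, a data listener on 4420, and host NQN nqn.2021-12.io.spdk:test allowed in) and started discovery from the host-side app on /tmp/host.sock. A condensed sketch of that sequence using the same RPCs visible in the trace, assuming scripts/rpc.py as the wrapper (the test itself goes through its rpc_cmd helper):

    # target side (default RPC socket)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # host side: attach to the discovery service and let it create nvme0/nvme0n1
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # host-side checks the test loops on, expected: nvme0, nvme0n1, then a notification count
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    ./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'
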
00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.271 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.529 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.530 [2024-07-14 15:06:28.806524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:49.530 [2024-07-14 15:06:28.807642] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:49.530 [2024-07-14 15:06:28.807703] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.530 15:06:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.788 [2024-07-14 15:06:28.894505] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.788 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.789 15:06:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.789 15:06:28 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:33:49.789 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.789 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:49.789 15:06:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:50.049 [2024-07-14 15:06:29.160066] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:50.049 [2024-07-14 15:06:29.160114] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.049 [2024-07-14 15:06:29.160133] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:50.983 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.984 15:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.984 [2024-07-14 15:06:30.035179] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:50.984 [2024-07-14 15:06:30.035268] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.984 [2024-07-14 15:06:30.037516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.984 [2024-07-14 15:06:30.037583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.984 [2024-07-14 15:06:30.037628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.984 [2024-07-14 15:06:30.037653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.984 [2024-07-14 15:06:30.037677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.984 [2024-07-14 15:06:30.037700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.984 [2024-07-14 15:06:30.037736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.984 [2024-07-14 15:06:30.037761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.984 [2024-07-14 15:06:30.037783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.984 15:06:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.984 [2024-07-14 15:06:30.047508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.984 [2024-07-14 15:06:30.057555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.984 [2024-07-14 15:06:30.057863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.984 [2024-07-14 15:06:30.057932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.984 [2024-07-14 15:06:30.057961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.984 [2024-07-14 15:06:30.057995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.984 [2024-07-14 15:06:30.058030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.984 [2024-07-14 15:06:30.058053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.984 [2024-07-14 15:06:30.058076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.984 [2024-07-14 15:06:30.058116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
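Note: the connect() failures with errno 111 (ECONNREFUSED) above are the expected fallout of the nvmf_subsystem_remove_listener call a few lines earlier: the target has stopped listening on 10.0.0.2:4420, so the host-side bdev_nvme keeps failing to reconnect that path until the next discovery log page drops 4420 and leaves only 4421. The removal step as the trace shows it, run against the target's default RPC socket (rpc.py wrapper path assumed):

    # target side: stop listening on the first data port, keeping 4421 up
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
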
00:33:50.984 [2024-07-14 15:06:30.067680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.984 [2024-07-14 15:06:30.067929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.984 [2024-07-14 15:06:30.067967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.984 [2024-07-14 15:06:30.067991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.984 [2024-07-14 15:06:30.068023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.984 [2024-07-14 15:06:30.068055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.984 [2024-07-14 15:06:30.068082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.984 [2024-07-14 15:06:30.068102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.984 [2024-07-14 15:06:30.068147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:50.984 [2024-07-14 15:06:30.077799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.984 [2024-07-14 15:06:30.078098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.984 [2024-07-14 15:06:30.078136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.984 [2024-07-14 15:06:30.078160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.984 [2024-07-14 15:06:30.078192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.984 [2024-07-14 15:06:30.078241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.984 [2024-07-14 15:06:30.078263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.984 [2024-07-14 15:06:30.078283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.984 [2024-07-14 15:06:30.078313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.984 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.985 [2024-07-14 15:06:30.087931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.985 [2024-07-14 15:06:30.088209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.985 [2024-07-14 15:06:30.088247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.985 [2024-07-14 15:06:30.088273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.985 [2024-07-14 15:06:30.088307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.985 [2024-07-14 15:06:30.088339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.985 [2024-07-14 15:06:30.088360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.985 [2024-07-14 15:06:30.088380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.985 [2024-07-14 15:06:30.088410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.985 [2024-07-14 15:06:30.098026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.985 [2024-07-14 15:06:30.098262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.985 [2024-07-14 15:06:30.098304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.985 [2024-07-14 15:06:30.098331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.985 [2024-07-14 15:06:30.098367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.985 [2024-07-14 15:06:30.098401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.985 [2024-07-14 15:06:30.098426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.985 [2024-07-14 15:06:30.098447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.985 [2024-07-14 15:06:30.098479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:50.985 [2024-07-14 15:06:30.108128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.985 [2024-07-14 15:06:30.108341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.985 [2024-07-14 15:06:30.108378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.985 [2024-07-14 15:06:30.108401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.985 [2024-07-14 15:06:30.108433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.985 [2024-07-14 15:06:30.108464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.985 [2024-07-14 15:06:30.108485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.985 [2024-07-14 15:06:30.108504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.985 [2024-07-14 15:06:30.108550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
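Note: after the reconnect retries above settle, the test waits for get_subsystem_paths to report only the second port. The same check can be run by hand with the RPC and jq filter visible in the trace (expected output once the stale 4420 path is gone: 4421; rpc.py wrapper path assumed):

    # host side: list the remaining transport service IDs for controller nvme0
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
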
00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.985 [2024-07-14 15:06:30.118227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.985 [2024-07-14 15:06:30.118462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.985 [2024-07-14 15:06:30.118499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:50.985 [2024-07-14 15:06:30.118524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:50.985 [2024-07-14 15:06:30.118557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:50.985 [2024-07-14 15:06:30.118589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.985 [2024-07-14 15:06:30.118611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.985 [2024-07-14 15:06:30.118631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.985 [2024-07-14 15:06:30.118660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:50.985 [2024-07-14 15:06:30.122017] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:50.985 [2024-07-14 15:06:30.122068] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:50.985 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.986 
15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.986 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:51.243 15:06:30 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.243 15:06:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.175 [2024-07-14 15:06:31.413886] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:52.175 [2024-07-14 15:06:31.413952] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:52.175 [2024-07-14 15:06:31.413994] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:52.433 [2024-07-14 15:06:31.540446] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:52.691 [2024-07-14 15:06:31.812273] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:52.691 [2024-07-14 15:06:31.812346] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 
15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 request: 00:33:52.691 { 00:33:52.691 "name": "nvme", 00:33:52.691 "trtype": "tcp", 00:33:52.691 "traddr": "10.0.0.2", 00:33:52.691 "adrfam": "ipv4", 00:33:52.691 "trsvcid": "8009", 00:33:52.691 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:52.691 "wait_for_attach": true, 00:33:52.691 "method": "bdev_nvme_start_discovery", 00:33:52.691 "req_id": 1 00:33:52.691 } 00:33:52.691 Got JSON-RPC error response 00:33:52.691 response: 00:33:52.691 { 00:33:52.691 "code": -17, 00:33:52.691 "message": "File exists" 00:33:52.691 } 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 request: 00:33:52.691 { 00:33:52.691 "name": "nvme_second", 00:33:52.691 "trtype": "tcp", 00:33:52.691 "traddr": "10.0.0.2", 00:33:52.691 "adrfam": "ipv4", 00:33:52.691 "trsvcid": "8009", 00:33:52.691 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:52.691 "wait_for_attach": true, 00:33:52.691 "method": "bdev_nvme_start_discovery", 00:33:52.691 "req_id": 1 00:33:52.691 } 00:33:52.691 Got JSON-RPC error response 00:33:52.691 response: 00:33:52.691 { 00:33:52.691 "code": -17, 00:33:52.691 "message": "File exists" 00:33:52.691 } 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.691 15:06:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.949 15:06:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.881 [2024-07-14 15:06:33.012156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.881 [2024-07-14 15:06:33.012247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:53.881 [2024-07-14 15:06:33.012340] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:53.881 [2024-07-14 15:06:33.012368] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:53.881 [2024-07-14 15:06:33.012389] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:54.812 [2024-07-14 15:06:34.014729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.812 [2024-07-14 15:06:34.014815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:54.812 [2024-07-14 15:06:34.014924] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:54.812 [2024-07-14 15:06:34.014952] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:54.812 [2024-07-14 15:06:34.014973] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:55.745 [2024-07-14 15:06:35.016713] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:55.745 request: 00:33:55.745 { 00:33:55.745 "name": "nvme_second", 00:33:55.745 "trtype": "tcp", 00:33:55.745 "traddr": "10.0.0.2", 00:33:55.745 "adrfam": "ipv4", 00:33:55.745 "trsvcid": "8010", 00:33:55.745 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:55.745 "wait_for_attach": false, 00:33:55.745 "attach_timeout_ms": 3000, 00:33:55.745 "method": "bdev_nvme_start_discovery", 00:33:55.745 "req_id": 1 00:33:55.745 } 00:33:55.745 Got JSON-RPC error 
response 00:33:55.745 response: 00:33:55.745 { 00:33:55.745 "code": -110, 00:33:55.745 "message": "Connection timed out" 00:33:55.745 } 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:55.745 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2024709 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.002 rmmod nvme_tcp 00:33:56.002 rmmod nvme_fabrics 00:33:56.002 rmmod nvme_keyring 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2024555 ']' 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2024555 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2024555 ']' 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2024555 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2024555 00:33:56.002 15:06:35 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:56.002 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2024555' 00:33:56.002 killing process with pid 2024555 00:33:56.003 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2024555 00:33:56.003 15:06:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2024555 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:57.421 15:06:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.320 15:06:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:59.320 00:33:59.320 real 0m16.132s 00:33:59.320 user 0m23.973s 00:33:59.321 sys 0m3.260s 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.321 ************************************ 00:33:59.321 END TEST nvmf_host_discovery 00:33:59.321 ************************************ 00:33:59.321 15:06:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:59.321 15:06:38 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:59.321 15:06:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:59.321 15:06:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.321 15:06:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.321 ************************************ 00:33:59.321 START TEST nvmf_host_multipath_status 00:33:59.321 ************************************ 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:59.321 * Looking for test storage... 
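The discovery test that just finished drives all of its checks through the waitforcondition helper traced above (common/autotest_common.sh@912-916): it evaluates a condition string such as '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' in a bounded loop. A minimal sketch of that polling pattern, reconstructed from the xtrace; the retry budget of 10 and the eval-based check come from the trace, while the sleep interval and the exact failure path are assumptions:

# Minimal sketch of the waitforcondition polling pattern seen in the trace.
# The bounded retry count (max=10) and the eval-based check mirror the xtrace;
# the 1-second sleep between attempts is an assumption for illustration.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0            # condition met, e.g. the path moved to port 4421
        fi
        sleep 1                 # assumed pause between polls
    done
    return 1                    # assumed: give up once the retry budget is spent
}
# Example usage mirroring host/discovery.sh@131 in the trace:
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'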
00:33:59.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:59.321 15:06:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:59.321 15:06:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:01.241 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:01.241 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:01.241 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
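The nvmf/common.sh trace above selects the E810 ports by vendor/device ID (0x8086 with 0x159b / 0x1592) and then resolves each matched PCI function to its kernel net device through sysfs. A simplified stand-in for that lookup; enumerating with lspci is an assumption (the real script consults a pre-built pci_bus_cache), while the sysfs net-device glob mirrors the trace:

# Simplified stand-in for the E810 NIC discovery replayed in nvmf/common.sh above.
# lspci enumeration is an assumption for illustration; the sysfs lookup and the
# "Found ..." message format follow the trace.
e810=()
while read -r addr vendor device _; do
    if [[ $vendor == 8086 && ( $device == 159b || $device == 1592 ) ]]; then
        e810+=("$addr")
        echo "Found $addr (0x$vendor - 0x$device)"
    fi
done < <(lspci -Dnmm | awk '{gsub(/"/,""); print $1, $3, $4}')

net_devs=()
for pci in "${e810[@]}"; do
    # Each matched port exposes its net device under sysfs,
    # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    net_devs+=("${pci_net_devs[@]##*/}")
done
echo "NVMf-capable net devices: ${net_devs[*]}"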
00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:01.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:01.242 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:01.242 15:06:40 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.242 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.499 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:01.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:34:01.500 00:34:01.500 --- 10.0.0.2 ping statistics --- 00:34:01.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.500 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:34:01.500 00:34:01.500 --- 10.0.0.1 ping statistics --- 00:34:01.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.500 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2027999 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2027999 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2027999 ']' 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:01.500 15:06:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.500 [2024-07-14 15:06:40.733077] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
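The nvmf_tcp_init sequence replayed above isolates the target port cvl_0_0 in its own network namespace, leaves the initiator port cvl_0_1 in the root namespace, and verifies reachability with a ping in each direction before nvmf_tgt is launched inside that namespace. A consolidated sketch of that plumbing; the interface names, addresses, and commands are taken from the log, while packaging them as a standalone script with set -e is an assumption:

# Consolidated sketch of the target-namespace setup traced above.
# Interface names (cvl_0_0 / cvl_0_1) and addresses come from the log;
# running this as a self-contained script with set -e is an assumption.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator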
00:34:01.500 [2024-07-14 15:06:40.733203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.757 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.757 [2024-07-14 15:06:40.874052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:02.015 [2024-07-14 15:06:41.130408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.015 [2024-07-14 15:06:41.130477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.015 [2024-07-14 15:06:41.130511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.015 [2024-07-14 15:06:41.130531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.015 [2024-07-14 15:06:41.130552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.015 [2024-07-14 15:06:41.130801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.015 [2024-07-14 15:06:41.130809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2027999 00:34:02.579 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:02.837 [2024-07-14 15:06:41.942516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.837 15:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:03.096 Malloc0 00:34:03.096 15:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:03.354 15:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.613 15:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.871 [2024-07-14 15:06:43.010080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.871 15:06:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:04.128 [2024-07-14 15:06:43.262893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2028400 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2028400 /var/tmp/bdevperf.sock 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2028400 ']' 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:04.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:04.128 15:06:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:05.058 15:06:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.058 15:06:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:05.059 15:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:05.316 15:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:05.881 Nvme0n1 00:34:05.881 15:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:06.139 Nvme0n1 00:34:06.139 15:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:06.139 15:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:08.663 15:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:08.663 15:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:08.663 15:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:08.663 15:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:10.036 15:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:10.036 15:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:10.036 15:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.036 15:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:10.036 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.036 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:10.036 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.036 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:10.293 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.293 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:10.293 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.293 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:10.550 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.550 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:10.550 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.550 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:10.807 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.807 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:10.807 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.807 15:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:11.065 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.065 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:11.065 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.065 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:11.323 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.323 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:11.323 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:11.581 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:11.839 15:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:12.772 15:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:12.772 15:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:12.772 15:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.772 15:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:13.028 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:13.028 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:13.028 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.028 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:13.285 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.285 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:13.285 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.285 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:13.542 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.542 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:13.542 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.542 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:13.798 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.798 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:13.798 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.798 15:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:14.055 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.055 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:14.055 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.055 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:14.313 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.313 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:14.313 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:14.570 15:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:14.828 15:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:15.759 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:15.759 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:15.759 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.759 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:16.016 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.016 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:16.016 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.016 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:16.273 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:16.273 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:16.273 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.273 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:16.531 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.531 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:16.531 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.531 15:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:16.788 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.788 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:16.788 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.788 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:17.046 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.046 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:17.046 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.046 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:17.302 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.302 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:17.302 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:17.559 15:06:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:17.816 15:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:18.777 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:18.777 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:18.777 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.777 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:19.035 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.035 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:19.035 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.035 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:19.293 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.293 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:19.293 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.293 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:19.552 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.552 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:19.552 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.552 15:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:19.810 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.810 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:19.810 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.810 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:20.067 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:34:20.067 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:20.067 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.067 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:20.324 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.324 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:20.324 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:20.581 15:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:20.839 15:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:21.772 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:21.772 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:22.030 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.030 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:22.030 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.030 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.290 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:22.547 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.547 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:34:22.547 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.547 15:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:22.805 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.805 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:22.805 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.805 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:23.062 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.062 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:23.062 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.062 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:23.319 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.319 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:23.319 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:23.577 15:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:23.834 15:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:24.768 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:24.768 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:24.768 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.768 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:25.025 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:25.025 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:25.025 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.025 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:25.283 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.283 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:25.283 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.283 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:25.541 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.541 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:25.541 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.541 15:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:25.799 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.799 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:25.799 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.799 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:26.057 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.057 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:26.057 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.057 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:26.316 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.316 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:26.574 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:26.574 15:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:26.832 15:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:27.091 15:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:28.025 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:28.025 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:28.025 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.025 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:28.283 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.283 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:28.283 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.283 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:28.541 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.541 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:28.541 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.541 15:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:28.799 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.799 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:28.799 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.799 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:29.056 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.056 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:29.056 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.056 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:29.314 15:07:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.314 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:29.314 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.314 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:29.572 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.572 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:29.572 15:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:29.830 15:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:30.088 15:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:31.021 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:31.021 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:31.021 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.021 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:31.279 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.279 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:31.279 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.279 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.537 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.537 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.537 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.537 15:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.795 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.795 15:07:11 
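Note on the policy switch above: once bdev_nvme_set_multipath_policy puts Nvme0n1 into active_active (marker sh@116), the subsequent check_status rounds expect both listeners to report current=true whenever their ANA states are equally preferred (optimized/optimized or non_optimized/non_optimized), whereas the rounds before the switch always expected at most one current path. A quick way to list the active ports, reusing the same RPC and the JSON layout implied by the jq filters in the trace (the field names are taken from those filters, not re-checked against the RPC schema):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.current==true).transport.trsvcid'
    # prints both 4420 and 4421 in the optimized/optimized rounds, and only the preferred port otherwise
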
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.795 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.795 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:32.052 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.052 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:32.052 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.052 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:32.309 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.309 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:32.309 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.309 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.567 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.567 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:32.567 15:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:32.825 15:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:33.085 15:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:34.042 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:34.042 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:34.042 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.042 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:34.304 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.304 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:34.304 15:07:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.304 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:34.561 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.561 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.561 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.561 15:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.823 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.823 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.823 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.823 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:35.085 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.085 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:35.085 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.085 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:35.342 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.342 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:35.342 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.342 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.600 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.600 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:35.600 15:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:35.857 15:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:36.115 15:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.493 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.750 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.750 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.750 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.750 15:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.007 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.007 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.007 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.007 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.264 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.264 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.264 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.264 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.522 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.522 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:38.522 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.522 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2028400 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2028400 ']' 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2028400 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2028400 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2028400' 00:34:38.781 killing process with pid 2028400 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2028400 00:34:38.781 15:07:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2028400 00:34:39.348 Connection closed with partial response: 00:34:39.348 00:34:39.348 00:34:39.625 15:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2028400 00:34:39.625 15:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:39.625 [2024-07-14 15:06:43.356711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:39.625 [2024-07-14 15:06:43.356885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2028400 ] 00:34:39.625 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.625 [2024-07-14 15:06:43.480756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.625 [2024-07-14 15:06:43.710085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:39.625 Running I/O for 90 seconds... 
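Before the remainder of the bdevperf log replayed from test/nvmf/host/try.txt continues below, here is a minimal reconstruction of the three helpers from host/multipath_status.sh that the xtrace above keeps invoking (markers sh@59-60, sh@64 and sh@68-73). Only the RPC names, the subsystem NQN, the addresses/ports and the jq filters are taken verbatim from the trace; the variable names rpc_py and bdevperf_rpc_sock and the &&-chaining in check_status are assumptions of this sketch, not the verbatim script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as invoked in the trace
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock                                  # assumed variable name

    # sh@59-60: set the ANA state of listener 10.0.0.2:4420 to $1 and of 10.0.0.2:4421 to $2
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

    # sh@64: read one attribute (current/connected/accessible) of the io_path behind one port
    # from bdevperf and compare it with the expected value
    port_status() {
        local port=$1 attr=$2 expected=$3
        [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
    }

    # sh@68-73: one round of checks; fails if any of the six values differs from the expectation
    check_status() {
        port_status 4420 current    $1 &&
        port_status 4421 current    $2 &&
        port_status 4420 connected  $3 &&
        port_status 4421 connected  $4 &&
        port_status 4420 accessible $5 &&
        port_status 4421 accessible $6
    }

With these in place, for example `set_ANA_state non_optimized inaccessible; sleep 1; check_status true false true true true false` reproduces the round traced at sh@133-135 above.
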
00:34:39.625 [2024-07-14 15:06:59.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.805767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.805838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.805872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.805958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.805999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.625 [2024-07-14 15:06:59.806912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.625 [2024-07-14 15:06:59.806938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.806972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.806997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.626 [2024-07-14 15:06:59.807777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.807950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.808952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:34:39.626 [2024-07-14 15:06:59.809625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.809887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.809914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.810938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.810972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.811857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.626 [2024-07-14 15:06:59.811939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.811975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.626 [2024-07-14 15:06:59.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.812034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.812059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.812093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.626 [2024-07-14 15:06:59.812152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.626 [2024-07-14 15:06:59.812176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.627 [2024-07-14 15:06:59.812582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.812968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.812992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.813938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.813975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.814000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.814061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:34:39.627 [2024-07-14 15:06:59.814525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.814965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.814990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.815025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.815053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.815999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.627 [2024-07-14 15:06:59.816033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.816944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.816971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.627 [2024-07-14 15:06:59.817346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.627 [2024-07-14 15:06:59.817517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.627 [2024-07-14 15:06:59.817542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.817969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.817998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.818990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:34:39.628 [2024-07-14 15:06:59.819233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.819950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.819984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.820009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.820043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.820067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.821942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.821968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.822002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.822027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.822061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:39.628 [2024-07-14 15:06:59.822092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.822128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.628 [2024-07-14 15:06:59.822152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.628 [2024-07-14 15:06:59.822197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.628 [2024-07-14 15:06:59.822222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.822970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.823969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.823994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.629 
[2024-07-14 15:06:59.824028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.824052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.824110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.824169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.824957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.824993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.825018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.825054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.825079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.825116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.825141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.629 [2024-07-14 15:06:59.826292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.826927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.826962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.629 [2024-07-14 15:06:59.826987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.629 [2024-07-14 15:06:59.827646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.629 [2024-07-14 15:06:59.827679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.827703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.827753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.827777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.827811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.827836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.827896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.827922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.827962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.827990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:34:39.630 [2024-07-14 15:06:59.828885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.828952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.828979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.829950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.829975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.830010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.830034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.830068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.830092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.830127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.830152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.830204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.830243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.830282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.830307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 
[2024-07-14 15:06:59.831705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.831968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.831996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.832065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.832124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.832202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.832262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.630 [2024-07-14 15:06:59.832321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.630 [2024-07-14 15:06:59.832355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59944 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.631 [2024-07-14 15:06:59.832380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.631 [2024-07-14 15:06:59.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.832940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.832967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833001] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.631 [2024-07-14 15:06:59.833486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.631 [2024-07-14 15:06:59.833522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833643] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.833974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.833999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.834459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.834953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.834977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.835327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.835351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.836331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.632 [2024-07-14 15:06:59.836455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.836961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.836997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.632 [2024-07-14 15:06:59.837141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.837961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.837986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.632 [2024-07-14 15:06:59.838841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.632 [2024-07-14 15:06:59.838896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.838934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.838959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.838994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:34:39.633 [2024-07-14 15:06:59.839053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.839956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.839981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.840380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.840403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.633 [2024-07-14 15:06:59.841911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.841949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.841974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59944 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.633 [2024-07-14 15:06:59.842550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.633 [2024-07-14 15:06:59.842608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.842941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.842965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:34:39.633 [2024-07-14 15:06:59.843810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.843966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.843991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.844026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.844051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.844086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.844111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.844146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.844170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.844205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.633 [2024-07-14 15:06:59.844250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.633 [2024-07-14 15:06:59.844286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.844652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.844710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.844765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.844820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.844903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.844940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.844967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.845443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.845467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.846441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.634 [2024-07-14 15:06:59.846509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.846570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:39.634 [2024-07-14 15:06:59.846630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.846712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.846802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.846902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.846950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.846975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.847972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.847996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:34:39.634 [2024-07-14 15:06:59.848607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.848956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.848981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.849016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.849041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.849075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.849100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.849134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.849163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.634 [2024-07-14 15:06:59.849199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.634 [2024-07-14 15:06:59.849225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.849961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.849987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.635 [2024-07-14 15:06:59.850462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.850495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.850518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.851953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.851978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.635 [2024-07-14 15:06:59.852670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.635 [2024-07-14 15:06:59.852728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.852960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.852995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:34:39.635 [2024-07-14 15:06:59.853324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.853925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.853977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.635 [2024-07-14 15:06:59.854463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.635 [2024-07-14 15:06:59.854497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.854522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.854580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.854731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.854788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.854848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.854932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.854991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:39.636 [2024-07-14 15:06:59.855234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.855487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.855513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.856538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.856621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.856742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.636 [2024-07-14 15:06:59.856801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.856860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.856945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.856994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.857941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.857978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:34:39.636 [2024-07-14 15:06:59.858114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.858945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.858985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.636 [2024-07-14 15:06:59.859326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.636 [2024-07-14 15:06:59.859361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.859958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.859993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.637 [2024-07-14 15:06:59.860019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.860602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.860625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.861944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.861969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:34:39.637 [2024-07-14 15:06:59.862827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.637 [2024-07-14 15:06:59.862851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.637 [2024-07-14 15:06:59.862936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.862971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.862995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.863947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.863981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.637 [2024-07-14 15:06:59.864537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.637 [2024-07-14 15:06:59.864566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.864625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.638 [2024-07-14 15:06:59.864699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.864772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.864827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.864908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.864947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.864971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.865916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.865949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.866053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.866121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.866190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.638 [2024-07-14 15:06:59.866336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.866938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.866963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:34:39.638 [2024-07-14 15:06:59.867002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.867955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.867995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.868901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.868942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.638 [2024-07-14 15:06:59.868972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.869012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.869037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.869076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.869101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.869140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.869181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.869220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.869244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.638 [2024-07-14 15:06:59.869281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.638 [2024-07-14 15:06:59.869306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.869936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.869962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.870000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.870024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.870063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.870089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.870128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.870153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.870208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.870233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:06:59.870431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:06:59.870460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.357614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.357698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.357755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.357798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.357852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.357886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.357926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.357952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.357998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:34:39.639 [2024-07-14 15:07:15.358317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.358696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.358732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.358761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.361784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.361817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.361883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.361926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.361965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.361991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.362632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.362692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.362945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.362980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:39.639 [2024-07-14 15:07:15.363185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.639 [2024-07-14 15:07:15.363404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.363465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.363525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.363561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.639 [2024-07-14 15:07:15.363586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.639 [2024-07-14 15:07:15.365624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.365718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.365781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.365842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.365910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.365971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.365996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.366115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.366671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.366928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.366965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.366994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:34:39.640 [2024-07-14 15:07:15.367150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.367366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.367424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.367481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.367516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.367541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.369593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.369662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.369728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.369790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.369850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.369951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.369988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.370193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.370252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.370681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.370738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.370940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.370976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:39.640 [2024-07-14 15:07:15.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.371059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.371120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.371179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.371238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.371302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.371362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.371422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.371483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.371557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.371592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.371617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.372664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.372749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.640 [2024-07-14 15:07:15.372808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.372892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.640 [2024-07-14 15:07:15.372958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.640 [2024-07-14 15:07:15.372994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.373056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.373086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.373124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.373149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.373201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.373226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.373279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.373304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.374431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.374562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.374623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.374683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.374743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.374819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.374913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.374950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.374981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:34:39.641 [2024-07-14 15:07:15.375019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.375244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.375414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.375571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.375691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.375888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.375951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.375987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.376012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.376072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.376136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.376211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.376285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.376344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.376378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.376401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.377616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.377714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.377783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.377843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.377917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.377953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.378014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.378038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.378073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:39.641 [2024-07-14 15:07:15.378098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.378133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.378174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.378211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.378251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.379752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.379799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.379853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.379907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.379946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.379972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.380902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.641 [2024-07-14 15:07:15.380964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.380999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.381024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.641 [2024-07-14 15:07:15.381059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.641 [2024-07-14 15:07:15.381084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:34:39.642 [2024-07-14 15:07:15.381515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.381717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.381774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.381808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.381831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.385745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.385778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.385835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.385861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.385948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.386591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.386712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.386941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.386979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.387004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:39.642 [2024-07-14 15:07:15.387329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.387609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.387636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.388595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.388736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.388798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.388875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.388947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.388982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.642 [2024-07-14 15:07:15.389648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.390150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.390184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.642 [2024-07-14 15:07:15.390225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.642 [2024-07-14 15:07:15.390251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.390312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:34:39.643 [2024-07-14 15:07:15.390615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.390946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.390971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.391006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.391031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.391065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.391090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.391124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.391149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.391183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.391207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.391258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.391282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.393691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.393948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.393981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.394216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.394333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.394578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:39.643 [2024-07-14 15:07:15.394635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.394691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.394725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.398838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.398902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.398980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.399891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.399944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.399971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.400044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.400293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.400349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.400519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:34:39.643 [2024-07-14 15:07:15.400550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.643 [2024-07-14 15:07:15.400574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.643 [2024-07-14 15:07:15.400606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.643 [2024-07-14 15:07:15.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.400738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.400793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.400850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.400957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.401018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.401493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.401549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.401753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.401776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.404682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.404714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.404770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.404795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.404829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.404868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.404928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.404955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.404990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:39.644 [2024-07-14 15:07:15.405250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.405830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.405945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.405970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.406034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.406092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.406165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.406226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.406924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.406991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.407066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.407406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.407747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:34:39.644 [2024-07-14 15:07:15.407782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.644 [2024-07-14 15:07:15.407807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.644 [2024-07-14 15:07:15.407927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.644 [2024-07-14 15:07:15.407955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.407991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.408017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.408051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.408076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.408112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.408136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.409869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.409911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.409991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.410767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.410937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.410962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:39.645 [2024-07-14 15:07:15.411387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.411867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.411971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.411996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.414960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.414997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.645 [2024-07-14 15:07:15.415460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:39.645 [2024-07-14 15:07:15.415788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.645 [2024-07-14 15:07:15.415813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:39.645 Received shutdown signal, test time was about 32.287113 seconds 00:34:39.645 00:34:39.645 Latency(us) 00:34:39.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.645 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:39.645 Verification LBA range: start 0x0 length 0x4000 00:34:39.645 Nvme0n1 : 32.29 5911.15 23.09 0.00 0.00 21618.36 289.75 4101097.24 00:34:39.645 =================================================================================================================== 00:34:39.645 Total : 5911.15 23.09 0.00 0.00 21618.36 289.75 4101097.24 00:34:39.645 15:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:39.902 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:39.902 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:39.902 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:39.902 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:39.902 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:39.903 rmmod nvme_tcp 00:34:39.903 rmmod nvme_fabrics 00:34:39.903 rmmod nvme_keyring 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2027999 ']' 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2027999 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2027999 ']' 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2027999 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2027999 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2027999' 00:34:39.903 killing process with pid 2027999 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2027999 00:34:39.903 15:07:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2027999 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:41.809 15:07:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.712 15:07:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:43.712 00:34:43.712 real 0m44.195s 00:34:43.712 user 2m11.317s 00:34:43.712 sys 0m10.115s 00:34:43.712 15:07:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.712 15:07:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.712 ************************************ 00:34:43.712 END TEST nvmf_host_multipath_status 00:34:43.712 ************************************ 00:34:43.712 15:07:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:43.712 15:07:22 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:43.712 15:07:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:43.712 15:07:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:43.712 15:07:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.712 ************************************ 00:34:43.712 START TEST nvmf_discovery_remove_ifc 00:34:43.712 ************************************ 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:43.712 * Looking for test storage... 00:34:43.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:43.712 15:07:22 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:34:43.712 15:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:45.615 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:45.615 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:45.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:45.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:45.615 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:45.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:45.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:34:45.874 00:34:45.874 --- 10.0.0.2 ping statistics --- 00:34:45.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.874 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:45.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:45.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:34:45.874 00:34:45.874 --- 10.0.0.1 ping statistics --- 00:34:45.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.874 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:45.874 15:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2034736 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2034736 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2034736 ']' 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:45.874 15:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:45.874 [2024-07-14 15:07:25.102535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
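
At this point nvmftestinit has split the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, and both directions answer ping before anything NVMe-related starts. nvmfappstart then launches the SPDK target inside that namespace on core mask 0x2. A condensed sketch of the same wiring, using the commands the harness logs above ($SPDK_ROOT stands in for the workspace path):

    # target NIC lives in its own namespace; the initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                       # initiator -> target data path check
    # start the target inside the namespace with all trace groups enabled
    ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
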
00:34:45.874 [2024-07-14 15:07:25.102703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.132 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.132 [2024-07-14 15:07:25.269442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.389 [2024-07-14 15:07:25.523067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.389 [2024-07-14 15:07:25.523143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.389 [2024-07-14 15:07:25.523172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.389 [2024-07-14 15:07:25.523198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.389 [2024-07-14 15:07:25.523219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.389 [2024-07-14 15:07:25.523266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.954 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.955 [2024-07-14 15:07:26.033778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.955 [2024-07-14 15:07:26.041987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:46.955 null0 00:34:46.955 [2024-07-14 15:07:26.073893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2034891 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2034891 /tmp/host.sock 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2034891 ']' 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:46.955 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:46.955 15:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.955 [2024-07-14 15:07:26.181888] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:46.955 [2024-07-14 15:07:26.182031] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034891 ] 00:34:46.955 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.214 [2024-07-14 15:07:26.323049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.474 [2024-07-14 15:07:26.573003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.042 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.300 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.300 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:48.300 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.300 15:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.261 [2024-07-14 15:07:28.506274] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:49.261 [2024-07-14 15:07:28.506315] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:49.261 [2024-07-14 15:07:28.506361] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:49.519 [2024-07-14 15:07:28.592662] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:49.519 [2024-07-14 15:07:28.777782] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:49.519 [2024-07-14 15:07:28.777883] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:49.519 [2024-07-14 15:07:28.777983] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:49.519 [2024-07-14 15:07:28.778024] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:49.519 [2024-07-14 15:07:28.778069] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:49.519 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.519 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:49.519 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:49.519 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.519 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:49.520 [2024-07-14 15:07:28.784606] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
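
The host side runs as a second SPDK app on /tmp/host.sock. Discovery is started against the target's discovery service on 10.0.0.2:8009, the referral to the NVM subsystem listening on port 4420 is followed automatically, and the attached namespace surfaces as bdev nvme0n1, which wait_for_bdev confirms by polling the bdev list. A condensed sketch of that sequence issued directly through scripts/rpc.py (rpc_cmd in the log is the harness wrapper around the same calls):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1
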
00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:49.520 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.779 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:49.780 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:49.780 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.780 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:49.780 15:07:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:50.716 15:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:52.098 15:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.098 15:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.098 15:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.083 15:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.020 15:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.957 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.957 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
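
With the target-side address deleted and cvl_0_0 downed, the host can no longer reach 10.0.0.2, so the test switches to wait_for_bdev '' and expects the bdev list to drain once the controller-loss timeout fires. The repeated bdev_get_bdevs / sleep 1 pairs above are that poll loop. Roughly, the two helpers reduce to the following sketch (the function names match the script; the retry bound is illustrative):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {                     # "" means "wait until the list is empty"
        local expected=$1
        for _ in $(seq 1 20); do
            [[ "$(get_bdev_list)" == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }
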
00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.958 15:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.958 [2024-07-14 15:07:34.219024] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:54.958 [2024-07-14 15:07:34.219119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.958 [2024-07-14 15:07:34.219148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.958 [2024-07-14 15:07:34.219186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.958 [2024-07-14 15:07:34.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.958 [2024-07-14 15:07:34.219242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.958 [2024-07-14 15:07:34.219266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.958 [2024-07-14 15:07:34.219289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.958 [2024-07-14 15:07:34.219317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.958 [2024-07-14 15:07:34.219340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.958 [2024-07-14 15:07:34.219361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.958 [2024-07-14 15:07:34.219391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:54.958 [2024-07-14 15:07:34.229047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:54.958 [2024-07-14 15:07:34.239103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:55.894 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:56.153 [2024-07-14 15:07:35.251139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:56.153 
[2024-07-14 15:07:35.251237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:56.153 [2024-07-14 15:07:35.251280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:56.153 [2024-07-14 15:07:35.251344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:56.153 [2024-07-14 15:07:35.252082] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:56.153 [2024-07-14 15:07:35.252128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:56.153 [2024-07-14 15:07:35.252158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:56.153 [2024-07-14 15:07:35.252181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:56.153 [2024-07-14 15:07:35.252250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:56.153 [2024-07-14 15:07:35.252277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:56.153 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.154 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.154 15:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.092 [2024-07-14 15:07:36.254828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:57.092 [2024-07-14 15:07:36.254897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:57.092 [2024-07-14 15:07:36.254924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:57.092 [2024-07-14 15:07:36.254960] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:57.092 [2024-07-14 15:07:36.255001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:57.092 [2024-07-14 15:07:36.255062] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:57.092 [2024-07-14 15:07:36.255142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.092 [2024-07-14 15:07:36.255188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.092 [2024-07-14 15:07:36.255219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.092 [2024-07-14 15:07:36.255251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.092 [2024-07-14 15:07:36.255276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.092 [2024-07-14 15:07:36.255298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.092 [2024-07-14 15:07:36.255321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.092 [2024-07-14 15:07:36.255343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.092 [2024-07-14 15:07:36.255366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.092 [2024-07-14 15:07:36.255388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.092 [2024-07-14 15:07:36.255409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:57.092 [2024-07-14 15:07:36.255487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:57.092 [2024-07-14 15:07:36.256483] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:57.092 [2024-07-14 15:07:36.256519] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:57.092 15:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.466 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.466 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.466 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:58.467 15:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.036 [2024-07-14 15:07:38.272682] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:59.036 [2024-07-14 15:07:38.272724] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:59.036 [2024-07-14 15:07:38.272765] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:59.293 [2024-07-14 15:07:38.400248] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:59.293 15:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.552 [2024-07-14 15:07:38.627063] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:59.552 [2024-07-14 15:07:38.627126] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:59.552 [2024-07-14 15:07:38.627227] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:59.552 [2024-07-14 15:07:38.627271] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:59.552 [2024-07-14 15:07:38.627297] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:59.552 [2024-07-14 15:07:38.672088] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
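(Editor's note: the sequence above is the interface flap that drives this test. A minimal reconstruction of those steps, taken directly from the xtrace, is sketched below; the netns name cvl_0_0_ns_spdk, the device cvl_0_0 and the address 10.0.0.2/24 are specific to this run and will differ on other hosts. This is a sketch of what the trace shows, not the script verbatim.)

# 1. Drop the target-side address and take the port down inside the target netns
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
#    -> the host-side TCP qpair times out (errno 110), the reset path fails,
#       and the discovery service removes nqn.2016-06.io.spdk:cnode0 (bdev list becomes empty)

# 2. Restore the address and bring the port back up
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#    -> the discovery poller re-reads the log page, re-attaches the subsystem
#       as nvme1, and a new bdev (nvme1n1) appears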
00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.488 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2034891 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2034891 ']' 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2034891 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2034891 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2034891' 00:35:00.489 killing process with pid 2034891 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2034891 00:35:00.489 15:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2034891 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:01.425 rmmod nvme_tcp 00:35:01.425 rmmod nvme_fabrics 00:35:01.425 rmmod nvme_keyring 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
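(Editor's note: between the two halves of the flap, the test repeatedly polls the host RPC socket until the bdev list matches what it expects. A minimal sketch of that get_bdev_list / wait_for_bdev pattern, reconstructed from the xtrace above, is shown here; in the real script rpc_cmd is an autotest helper and the wait is presumably bounded, both of which this sketch simplifies. The /tmp/host.sock path and the one-second interval are taken from this run.)

# Collapse the current bdev names onto one sorted line, as the trace does
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the bdev list equals the expected value
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

# Usage mirroring the trace:
#   wait_for_bdev ''         # block until every NVMe bdev is gone
#   wait_for_bdev nvme1n1    # block until the re-discovered namespace shows up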
00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2034736 ']' 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2034736 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2034736 ']' 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2034736 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2034736 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2034736' 00:35:01.425 killing process with pid 2034736 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2034736 00:35:01.425 15:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2034736 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.802 15:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.709 15:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.709 00:35:04.709 real 0m21.202s 00:35:04.709 user 0m31.095s 00:35:04.709 sys 0m3.373s 00:35:04.709 15:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:04.709 15:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.709 ************************************ 00:35:04.709 END TEST nvmf_discovery_remove_ifc 00:35:04.709 ************************************ 00:35:04.709 15:07:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:04.709 15:07:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:04.709 15:07:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:04.709 15:07:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:04.709 15:07:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.967 ************************************ 00:35:04.967 START TEST nvmf_identify_kernel_target 00:35:04.967 ************************************ 
00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:04.967 * Looking for test storage... 00:35:04.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.967 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:04.968 15:07:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.968 15:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:06.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:06.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.866 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:06.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:06.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.867 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:07.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:35:07.126 00:35:07.126 --- 10.0.0.2 ping statistics --- 00:35:07.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.126 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:07.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:35:07.126 00:35:07.126 --- 10.0.0.1 ping statistics --- 00:35:07.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.126 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:07.126 15:07:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:07.126 15:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:08.061 Waiting for block devices as requested 00:35:08.061 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:08.322 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:08.322 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:08.582 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:08.582 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:08.582 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:08.582 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:08.843 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:08.843 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:08.843 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:08.843 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.103 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.103 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.103 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.362 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:09.362 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:09.362 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:09.362 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:09.620 No valid GPT data, bailing 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:09.620 00:35:09.620 Discovery Log Number of Records 2, Generation counter 2 00:35:09.620 =====Discovery Log Entry 0====== 00:35:09.620 trtype: tcp 00:35:09.620 adrfam: ipv4 00:35:09.620 subtype: current discovery subsystem 00:35:09.620 treq: not specified, sq flow control disable supported 00:35:09.620 portid: 1 00:35:09.620 trsvcid: 4420 00:35:09.620 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:09.620 traddr: 10.0.0.1 00:35:09.620 eflags: none 00:35:09.620 sectype: none 00:35:09.620 =====Discovery Log Entry 1====== 00:35:09.620 trtype: tcp 00:35:09.620 adrfam: ipv4 00:35:09.620 subtype: nvme subsystem 00:35:09.620 treq: not specified, sq flow control disable supported 00:35:09.620 portid: 1 00:35:09.620 trsvcid: 4420 00:35:09.620 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:09.620 traddr: 10.0.0.1 00:35:09.620 eflags: none 00:35:09.620 sectype: none 00:35:09.620 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:09.620 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:09.620 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.880 ===================================================== 00:35:09.880 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:09.880 ===================================================== 00:35:09.880 Controller Capabilities/Features 00:35:09.880 ================================ 00:35:09.880 Vendor ID: 0000 00:35:09.880 Subsystem Vendor ID: 0000 00:35:09.880 Serial Number: 9909d05cdf1e50f2273c 00:35:09.880 Model Number: Linux 00:35:09.880 Firmware Version: 6.7.0-68 00:35:09.880 Recommended Arb Burst: 0 00:35:09.880 IEEE OUI Identifier: 00 00 00 00:35:09.880 Multi-path I/O 00:35:09.880 May have multiple subsystem ports: No 00:35:09.880 May have multiple 
controllers: No 00:35:09.880 Associated with SR-IOV VF: No 00:35:09.880 Max Data Transfer Size: Unlimited 00:35:09.880 Max Number of Namespaces: 0 00:35:09.880 Max Number of I/O Queues: 1024 00:35:09.880 NVMe Specification Version (VS): 1.3 00:35:09.880 NVMe Specification Version (Identify): 1.3 00:35:09.880 Maximum Queue Entries: 1024 00:35:09.880 Contiguous Queues Required: No 00:35:09.880 Arbitration Mechanisms Supported 00:35:09.880 Weighted Round Robin: Not Supported 00:35:09.880 Vendor Specific: Not Supported 00:35:09.880 Reset Timeout: 7500 ms 00:35:09.880 Doorbell Stride: 4 bytes 00:35:09.880 NVM Subsystem Reset: Not Supported 00:35:09.880 Command Sets Supported 00:35:09.880 NVM Command Set: Supported 00:35:09.880 Boot Partition: Not Supported 00:35:09.880 Memory Page Size Minimum: 4096 bytes 00:35:09.880 Memory Page Size Maximum: 4096 bytes 00:35:09.880 Persistent Memory Region: Not Supported 00:35:09.880 Optional Asynchronous Events Supported 00:35:09.880 Namespace Attribute Notices: Not Supported 00:35:09.880 Firmware Activation Notices: Not Supported 00:35:09.880 ANA Change Notices: Not Supported 00:35:09.880 PLE Aggregate Log Change Notices: Not Supported 00:35:09.880 LBA Status Info Alert Notices: Not Supported 00:35:09.880 EGE Aggregate Log Change Notices: Not Supported 00:35:09.880 Normal NVM Subsystem Shutdown event: Not Supported 00:35:09.880 Zone Descriptor Change Notices: Not Supported 00:35:09.880 Discovery Log Change Notices: Supported 00:35:09.880 Controller Attributes 00:35:09.880 128-bit Host Identifier: Not Supported 00:35:09.880 Non-Operational Permissive Mode: Not Supported 00:35:09.880 NVM Sets: Not Supported 00:35:09.880 Read Recovery Levels: Not Supported 00:35:09.880 Endurance Groups: Not Supported 00:35:09.880 Predictable Latency Mode: Not Supported 00:35:09.880 Traffic Based Keep ALive: Not Supported 00:35:09.880 Namespace Granularity: Not Supported 00:35:09.880 SQ Associations: Not Supported 00:35:09.880 UUID List: Not Supported 00:35:09.880 Multi-Domain Subsystem: Not Supported 00:35:09.880 Fixed Capacity Management: Not Supported 00:35:09.880 Variable Capacity Management: Not Supported 00:35:09.880 Delete Endurance Group: Not Supported 00:35:09.880 Delete NVM Set: Not Supported 00:35:09.880 Extended LBA Formats Supported: Not Supported 00:35:09.880 Flexible Data Placement Supported: Not Supported 00:35:09.880 00:35:09.880 Controller Memory Buffer Support 00:35:09.880 ================================ 00:35:09.880 Supported: No 00:35:09.880 00:35:09.880 Persistent Memory Region Support 00:35:09.880 ================================ 00:35:09.880 Supported: No 00:35:09.880 00:35:09.880 Admin Command Set Attributes 00:35:09.880 ============================ 00:35:09.880 Security Send/Receive: Not Supported 00:35:09.880 Format NVM: Not Supported 00:35:09.880 Firmware Activate/Download: Not Supported 00:35:09.880 Namespace Management: Not Supported 00:35:09.880 Device Self-Test: Not Supported 00:35:09.880 Directives: Not Supported 00:35:09.880 NVMe-MI: Not Supported 00:35:09.880 Virtualization Management: Not Supported 00:35:09.880 Doorbell Buffer Config: Not Supported 00:35:09.880 Get LBA Status Capability: Not Supported 00:35:09.880 Command & Feature Lockdown Capability: Not Supported 00:35:09.880 Abort Command Limit: 1 00:35:09.881 Async Event Request Limit: 1 00:35:09.881 Number of Firmware Slots: N/A 00:35:09.881 Firmware Slot 1 Read-Only: N/A 00:35:09.881 Firmware Activation Without Reset: N/A 00:35:09.881 Multiple Update Detection Support: N/A 
00:35:09.881 Firmware Update Granularity: No Information Provided 00:35:09.881 Per-Namespace SMART Log: No 00:35:09.881 Asymmetric Namespace Access Log Page: Not Supported 00:35:09.881 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:09.881 Command Effects Log Page: Not Supported 00:35:09.881 Get Log Page Extended Data: Supported 00:35:09.881 Telemetry Log Pages: Not Supported 00:35:09.881 Persistent Event Log Pages: Not Supported 00:35:09.881 Supported Log Pages Log Page: May Support 00:35:09.881 Commands Supported & Effects Log Page: Not Supported 00:35:09.881 Feature Identifiers & Effects Log Page:May Support 00:35:09.881 NVMe-MI Commands & Effects Log Page: May Support 00:35:09.881 Data Area 4 for Telemetry Log: Not Supported 00:35:09.881 Error Log Page Entries Supported: 1 00:35:09.881 Keep Alive: Not Supported 00:35:09.881 00:35:09.881 NVM Command Set Attributes 00:35:09.881 ========================== 00:35:09.881 Submission Queue Entry Size 00:35:09.881 Max: 1 00:35:09.881 Min: 1 00:35:09.881 Completion Queue Entry Size 00:35:09.881 Max: 1 00:35:09.881 Min: 1 00:35:09.881 Number of Namespaces: 0 00:35:09.881 Compare Command: Not Supported 00:35:09.881 Write Uncorrectable Command: Not Supported 00:35:09.881 Dataset Management Command: Not Supported 00:35:09.881 Write Zeroes Command: Not Supported 00:35:09.881 Set Features Save Field: Not Supported 00:35:09.881 Reservations: Not Supported 00:35:09.881 Timestamp: Not Supported 00:35:09.881 Copy: Not Supported 00:35:09.881 Volatile Write Cache: Not Present 00:35:09.881 Atomic Write Unit (Normal): 1 00:35:09.881 Atomic Write Unit (PFail): 1 00:35:09.881 Atomic Compare & Write Unit: 1 00:35:09.881 Fused Compare & Write: Not Supported 00:35:09.881 Scatter-Gather List 00:35:09.881 SGL Command Set: Supported 00:35:09.881 SGL Keyed: Not Supported 00:35:09.881 SGL Bit Bucket Descriptor: Not Supported 00:35:09.881 SGL Metadata Pointer: Not Supported 00:35:09.881 Oversized SGL: Not Supported 00:35:09.881 SGL Metadata Address: Not Supported 00:35:09.881 SGL Offset: Supported 00:35:09.881 Transport SGL Data Block: Not Supported 00:35:09.881 Replay Protected Memory Block: Not Supported 00:35:09.881 00:35:09.881 Firmware Slot Information 00:35:09.881 ========================= 00:35:09.881 Active slot: 0 00:35:09.881 00:35:09.881 00:35:09.881 Error Log 00:35:09.881 ========= 00:35:09.881 00:35:09.881 Active Namespaces 00:35:09.881 ================= 00:35:09.881 Discovery Log Page 00:35:09.881 ================== 00:35:09.881 Generation Counter: 2 00:35:09.881 Number of Records: 2 00:35:09.881 Record Format: 0 00:35:09.881 00:35:09.881 Discovery Log Entry 0 00:35:09.881 ---------------------- 00:35:09.881 Transport Type: 3 (TCP) 00:35:09.881 Address Family: 1 (IPv4) 00:35:09.881 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:09.881 Entry Flags: 00:35:09.881 Duplicate Returned Information: 0 00:35:09.881 Explicit Persistent Connection Support for Discovery: 0 00:35:09.881 Transport Requirements: 00:35:09.881 Secure Channel: Not Specified 00:35:09.881 Port ID: 1 (0x0001) 00:35:09.881 Controller ID: 65535 (0xffff) 00:35:09.881 Admin Max SQ Size: 32 00:35:09.881 Transport Service Identifier: 4420 00:35:09.881 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:09.881 Transport Address: 10.0.0.1 00:35:09.881 Discovery Log Entry 1 00:35:09.881 ---------------------- 00:35:09.881 Transport Type: 3 (TCP) 00:35:09.881 Address Family: 1 (IPv4) 00:35:09.881 Subsystem Type: 2 (NVM Subsystem) 00:35:09.881 Entry Flags: 
00:35:09.881 Duplicate Returned Information: 0 00:35:09.881 Explicit Persistent Connection Support for Discovery: 0 00:35:09.881 Transport Requirements: 00:35:09.881 Secure Channel: Not Specified 00:35:09.881 Port ID: 1 (0x0001) 00:35:09.881 Controller ID: 65535 (0xffff) 00:35:09.881 Admin Max SQ Size: 32 00:35:09.881 Transport Service Identifier: 4420 00:35:09.881 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:09.881 Transport Address: 10.0.0.1 00:35:09.881 15:07:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.881 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.881 get_feature(0x01) failed 00:35:09.881 get_feature(0x02) failed 00:35:09.881 get_feature(0x04) failed 00:35:09.881 ===================================================== 00:35:09.881 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:09.881 ===================================================== 00:35:09.881 Controller Capabilities/Features 00:35:09.881 ================================ 00:35:09.881 Vendor ID: 0000 00:35:09.881 Subsystem Vendor ID: 0000 00:35:09.881 Serial Number: 87c8f6a4f20db4a2d41d 00:35:09.881 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:09.881 Firmware Version: 6.7.0-68 00:35:09.881 Recommended Arb Burst: 6 00:35:09.881 IEEE OUI Identifier: 00 00 00 00:35:09.881 Multi-path I/O 00:35:09.881 May have multiple subsystem ports: Yes 00:35:09.881 May have multiple controllers: Yes 00:35:09.881 Associated with SR-IOV VF: No 00:35:09.881 Max Data Transfer Size: Unlimited 00:35:09.881 Max Number of Namespaces: 1024 00:35:09.881 Max Number of I/O Queues: 128 00:35:09.881 NVMe Specification Version (VS): 1.3 00:35:09.881 NVMe Specification Version (Identify): 1.3 00:35:09.881 Maximum Queue Entries: 1024 00:35:09.881 Contiguous Queues Required: No 00:35:09.881 Arbitration Mechanisms Supported 00:35:09.881 Weighted Round Robin: Not Supported 00:35:09.881 Vendor Specific: Not Supported 00:35:09.881 Reset Timeout: 7500 ms 00:35:09.881 Doorbell Stride: 4 bytes 00:35:09.881 NVM Subsystem Reset: Not Supported 00:35:09.881 Command Sets Supported 00:35:09.881 NVM Command Set: Supported 00:35:09.881 Boot Partition: Not Supported 00:35:09.881 Memory Page Size Minimum: 4096 bytes 00:35:09.881 Memory Page Size Maximum: 4096 bytes 00:35:09.881 Persistent Memory Region: Not Supported 00:35:09.881 Optional Asynchronous Events Supported 00:35:09.881 Namespace Attribute Notices: Supported 00:35:09.881 Firmware Activation Notices: Not Supported 00:35:09.881 ANA Change Notices: Supported 00:35:09.881 PLE Aggregate Log Change Notices: Not Supported 00:35:09.881 LBA Status Info Alert Notices: Not Supported 00:35:09.881 EGE Aggregate Log Change Notices: Not Supported 00:35:09.881 Normal NVM Subsystem Shutdown event: Not Supported 00:35:09.881 Zone Descriptor Change Notices: Not Supported 00:35:09.881 Discovery Log Change Notices: Not Supported 00:35:09.881 Controller Attributes 00:35:09.881 128-bit Host Identifier: Supported 00:35:09.881 Non-Operational Permissive Mode: Not Supported 00:35:09.881 NVM Sets: Not Supported 00:35:09.881 Read Recovery Levels: Not Supported 00:35:09.881 Endurance Groups: Not Supported 00:35:09.881 Predictable Latency Mode: Not Supported 00:35:09.881 Traffic Based Keep ALive: Supported 00:35:09.881 Namespace Granularity: Not Supported 
00:35:09.881 SQ Associations: Not Supported 00:35:09.881 UUID List: Not Supported 00:35:09.881 Multi-Domain Subsystem: Not Supported 00:35:09.881 Fixed Capacity Management: Not Supported 00:35:09.881 Variable Capacity Management: Not Supported 00:35:09.881 Delete Endurance Group: Not Supported 00:35:09.881 Delete NVM Set: Not Supported 00:35:09.881 Extended LBA Formats Supported: Not Supported 00:35:09.881 Flexible Data Placement Supported: Not Supported 00:35:09.881 00:35:09.881 Controller Memory Buffer Support 00:35:09.881 ================================ 00:35:09.881 Supported: No 00:35:09.881 00:35:09.881 Persistent Memory Region Support 00:35:09.881 ================================ 00:35:09.881 Supported: No 00:35:09.881 00:35:09.881 Admin Command Set Attributes 00:35:09.881 ============================ 00:35:09.881 Security Send/Receive: Not Supported 00:35:09.881 Format NVM: Not Supported 00:35:09.881 Firmware Activate/Download: Not Supported 00:35:09.881 Namespace Management: Not Supported 00:35:09.881 Device Self-Test: Not Supported 00:35:09.881 Directives: Not Supported 00:35:09.881 NVMe-MI: Not Supported 00:35:09.881 Virtualization Management: Not Supported 00:35:09.881 Doorbell Buffer Config: Not Supported 00:35:09.881 Get LBA Status Capability: Not Supported 00:35:09.881 Command & Feature Lockdown Capability: Not Supported 00:35:09.881 Abort Command Limit: 4 00:35:09.881 Async Event Request Limit: 4 00:35:09.881 Number of Firmware Slots: N/A 00:35:09.881 Firmware Slot 1 Read-Only: N/A 00:35:09.881 Firmware Activation Without Reset: N/A 00:35:09.881 Multiple Update Detection Support: N/A 00:35:09.881 Firmware Update Granularity: No Information Provided 00:35:09.881 Per-Namespace SMART Log: Yes 00:35:09.881 Asymmetric Namespace Access Log Page: Supported 00:35:09.881 ANA Transition Time : 10 sec 00:35:09.881 00:35:09.881 Asymmetric Namespace Access Capabilities 00:35:09.881 ANA Optimized State : Supported 00:35:09.881 ANA Non-Optimized State : Supported 00:35:09.881 ANA Inaccessible State : Supported 00:35:09.881 ANA Persistent Loss State : Supported 00:35:09.882 ANA Change State : Supported 00:35:09.882 ANAGRPID is not changed : No 00:35:09.882 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:09.882 00:35:09.882 ANA Group Identifier Maximum : 128 00:35:09.882 Number of ANA Group Identifiers : 128 00:35:09.882 Max Number of Allowed Namespaces : 1024 00:35:09.882 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:09.882 Command Effects Log Page: Supported 00:35:09.882 Get Log Page Extended Data: Supported 00:35:09.882 Telemetry Log Pages: Not Supported 00:35:09.882 Persistent Event Log Pages: Not Supported 00:35:09.882 Supported Log Pages Log Page: May Support 00:35:09.882 Commands Supported & Effects Log Page: Not Supported 00:35:09.882 Feature Identifiers & Effects Log Page:May Support 00:35:09.882 NVMe-MI Commands & Effects Log Page: May Support 00:35:09.882 Data Area 4 for Telemetry Log: Not Supported 00:35:09.882 Error Log Page Entries Supported: 128 00:35:09.882 Keep Alive: Supported 00:35:09.882 Keep Alive Granularity: 1000 ms 00:35:09.882 00:35:09.882 NVM Command Set Attributes 00:35:09.882 ========================== 00:35:09.882 Submission Queue Entry Size 00:35:09.882 Max: 64 00:35:09.882 Min: 64 00:35:09.882 Completion Queue Entry Size 00:35:09.882 Max: 16 00:35:09.882 Min: 16 00:35:09.882 Number of Namespaces: 1024 00:35:09.882 Compare Command: Not Supported 00:35:09.882 Write Uncorrectable Command: Not Supported 00:35:09.882 Dataset Management Command: Supported 
00:35:09.882 Write Zeroes Command: Supported 00:35:09.882 Set Features Save Field: Not Supported 00:35:09.882 Reservations: Not Supported 00:35:09.882 Timestamp: Not Supported 00:35:09.882 Copy: Not Supported 00:35:09.882 Volatile Write Cache: Present 00:35:09.882 Atomic Write Unit (Normal): 1 00:35:09.882 Atomic Write Unit (PFail): 1 00:35:09.882 Atomic Compare & Write Unit: 1 00:35:09.882 Fused Compare & Write: Not Supported 00:35:09.882 Scatter-Gather List 00:35:09.882 SGL Command Set: Supported 00:35:09.882 SGL Keyed: Not Supported 00:35:09.882 SGL Bit Bucket Descriptor: Not Supported 00:35:09.882 SGL Metadata Pointer: Not Supported 00:35:09.882 Oversized SGL: Not Supported 00:35:09.882 SGL Metadata Address: Not Supported 00:35:09.882 SGL Offset: Supported 00:35:09.882 Transport SGL Data Block: Not Supported 00:35:09.882 Replay Protected Memory Block: Not Supported 00:35:09.882 00:35:09.882 Firmware Slot Information 00:35:09.882 ========================= 00:35:09.882 Active slot: 0 00:35:09.882 00:35:09.882 Asymmetric Namespace Access 00:35:09.882 =========================== 00:35:09.882 Change Count : 0 00:35:09.882 Number of ANA Group Descriptors : 1 00:35:09.882 ANA Group Descriptor : 0 00:35:09.882 ANA Group ID : 1 00:35:09.882 Number of NSID Values : 1 00:35:09.882 Change Count : 0 00:35:09.882 ANA State : 1 00:35:09.882 Namespace Identifier : 1 00:35:09.882 00:35:09.882 Commands Supported and Effects 00:35:09.882 ============================== 00:35:09.882 Admin Commands 00:35:09.882 -------------- 00:35:09.882 Get Log Page (02h): Supported 00:35:09.882 Identify (06h): Supported 00:35:09.882 Abort (08h): Supported 00:35:09.882 Set Features (09h): Supported 00:35:09.882 Get Features (0Ah): Supported 00:35:09.882 Asynchronous Event Request (0Ch): Supported 00:35:09.882 Keep Alive (18h): Supported 00:35:09.882 I/O Commands 00:35:09.882 ------------ 00:35:09.882 Flush (00h): Supported 00:35:09.882 Write (01h): Supported LBA-Change 00:35:09.882 Read (02h): Supported 00:35:09.882 Write Zeroes (08h): Supported LBA-Change 00:35:09.882 Dataset Management (09h): Supported 00:35:09.882 00:35:09.882 Error Log 00:35:09.882 ========= 00:35:09.882 Entry: 0 00:35:09.882 Error Count: 0x3 00:35:09.882 Submission Queue Id: 0x0 00:35:09.882 Command Id: 0x5 00:35:09.882 Phase Bit: 0 00:35:09.882 Status Code: 0x2 00:35:09.882 Status Code Type: 0x0 00:35:09.882 Do Not Retry: 1 00:35:09.882 Error Location: 0x28 00:35:09.882 LBA: 0x0 00:35:09.882 Namespace: 0x0 00:35:09.882 Vendor Log Page: 0x0 00:35:09.882 ----------- 00:35:09.882 Entry: 1 00:35:09.882 Error Count: 0x2 00:35:09.882 Submission Queue Id: 0x0 00:35:09.882 Command Id: 0x5 00:35:09.882 Phase Bit: 0 00:35:09.882 Status Code: 0x2 00:35:09.882 Status Code Type: 0x0 00:35:09.882 Do Not Retry: 1 00:35:09.882 Error Location: 0x28 00:35:09.882 LBA: 0x0 00:35:09.882 Namespace: 0x0 00:35:09.882 Vendor Log Page: 0x0 00:35:09.882 ----------- 00:35:09.882 Entry: 2 00:35:09.882 Error Count: 0x1 00:35:09.882 Submission Queue Id: 0x0 00:35:09.882 Command Id: 0x4 00:35:09.882 Phase Bit: 0 00:35:09.882 Status Code: 0x2 00:35:09.882 Status Code Type: 0x0 00:35:09.882 Do Not Retry: 1 00:35:09.882 Error Location: 0x28 00:35:09.882 LBA: 0x0 00:35:09.882 Namespace: 0x0 00:35:09.882 Vendor Log Page: 0x0 00:35:09.882 00:35:09.882 Number of Queues 00:35:09.882 ================ 00:35:09.882 Number of I/O Submission Queues: 128 00:35:09.882 Number of I/O Completion Queues: 128 00:35:09.882 00:35:09.882 ZNS Specific Controller Data 00:35:09.882 
============================ 00:35:09.882 Zone Append Size Limit: 0 00:35:09.882 00:35:09.882 00:35:09.882 Active Namespaces 00:35:09.882 ================= 00:35:09.882 get_feature(0x05) failed 00:35:09.882 Namespace ID:1 00:35:09.882 Command Set Identifier: NVM (00h) 00:35:09.882 Deallocate: Supported 00:35:09.882 Deallocated/Unwritten Error: Not Supported 00:35:09.882 Deallocated Read Value: Unknown 00:35:09.882 Deallocate in Write Zeroes: Not Supported 00:35:09.882 Deallocated Guard Field: 0xFFFF 00:35:09.882 Flush: Supported 00:35:09.882 Reservation: Not Supported 00:35:09.882 Namespace Sharing Capabilities: Multiple Controllers 00:35:09.882 Size (in LBAs): 1953525168 (931GiB) 00:35:09.882 Capacity (in LBAs): 1953525168 (931GiB) 00:35:09.882 Utilization (in LBAs): 1953525168 (931GiB) 00:35:09.882 UUID: 7846ebfe-728f-4379-a538-7bca4e0af9e2 00:35:09.882 Thin Provisioning: Not Supported 00:35:09.882 Per-NS Atomic Units: Yes 00:35:09.882 Atomic Boundary Size (Normal): 0 00:35:09.882 Atomic Boundary Size (PFail): 0 00:35:09.882 Atomic Boundary Offset: 0 00:35:09.882 NGUID/EUI64 Never Reused: No 00:35:09.882 ANA group ID: 1 00:35:09.882 Namespace Write Protected: No 00:35:09.882 Number of LBA Formats: 1 00:35:09.882 Current LBA Format: LBA Format #00 00:35:09.882 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:09.882 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.882 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:09.882 rmmod nvme_tcp 00:35:10.141 rmmod nvme_fabrics 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.141 15:07:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:12.050 
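For reference, the discovery records and controller data above come from spdk_nvme_identify pointed at the kernel nvmet target on 10.0.0.1:4420. A roughly equivalent manual check with nvme-cli (not part of this test; the /dev/nvme0 node name after connecting is only illustrative) would be:

  nvme discover -t tcp -a 10.0.0.1 -s 4420           # should list the discovery subsystem and nqn.2016-06.io.spdk:testnqn
  nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  nvme id-ctrl /dev/nvme0                            # illustrative controller node created by the connect
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn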
15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:12.050 15:07:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:13.428 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:13.428 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:13.428 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:14.386 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:14.386 00:35:14.386 real 0m9.606s 00:35:14.386 user 0m2.036s 00:35:14.386 sys 0m3.556s 00:35:14.386 15:07:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.386 15:07:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 ************************************ 00:35:14.386 END TEST nvmf_identify_kernel_target 00:35:14.386 ************************************ 00:35:14.386 15:07:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:14.386 15:07:53 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:14.386 15:07:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:14.386 15:07:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.386 15:07:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 ************************************ 00:35:14.386 START TEST nvmf_auth_host 00:35:14.386 ************************************ 00:35:14.386 15:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:14.643 * Looking for test storage... 00:35:14.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.643 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.643 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:14.643 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.643 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:14.644 15:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.545 
15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:16.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:16.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:16.545 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:16.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.545 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:16.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:35:16.546 00:35:16.546 --- 10.0.0.2 ping statistics --- 00:35:16.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.546 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:35:16.546 00:35:16.546 --- 10.0.0.1 ping statistics --- 00:35:16.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.546 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2042228 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2042228 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2042228 ']' 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
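Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) is moved into a private namespace to act as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened before the cross-namespace pings confirm connectivity:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator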
00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:16.546 15:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.480 15:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:17.480 15:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:17.480 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.480 15:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:17.480 15:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=092a013d5834236972ea565fdd38befd 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tRK 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 092a013d5834236972ea565fdd38befd 0 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 092a013d5834236972ea565fdd38befd 0 00:35:17.738 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=092a013d5834236972ea565fdd38befd 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tRK 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tRK 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tRK 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:17.739 
15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0215130351af944a24a23fc11a75e9e65f47993d8aab858aae85e3f758174d89 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KcZ 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0215130351af944a24a23fc11a75e9e65f47993d8aab858aae85e3f758174d89 3 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0215130351af944a24a23fc11a75e9e65f47993d8aab858aae85e3f758174d89 3 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0215130351af944a24a23fc11a75e9e65f47993d8aab858aae85e3f758174d89 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KcZ 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KcZ 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.KcZ 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac7dbec2a5077b1b963787ec374425fde2ec35164a3f8c92 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sZS 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac7dbec2a5077b1b963787ec374425fde2ec35164a3f8c92 0 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac7dbec2a5077b1b963787ec374425fde2ec35164a3f8c92 0 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac7dbec2a5077b1b963787ec374425fde2ec35164a3f8c92 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sZS 00:35:17.739 15:07:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sZS 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sZS 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=770ec127d3301bf49957bdab7fec9f1b5b8c8bb2dc4e2cfd 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.B4p 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 770ec127d3301bf49957bdab7fec9f1b5b8c8bb2dc4e2cfd 2 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 770ec127d3301bf49957bdab7fec9f1b5b8c8bb2dc4e2cfd 2 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=770ec127d3301bf49957bdab7fec9f1b5b8c8bb2dc4e2cfd 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:17.739 15:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.B4p 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.B4p 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.B4p 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=acc9a04bc0222259248790d38048b977 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eLL 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key acc9a04bc0222259248790d38048b977 1 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 acc9a04bc0222259248790d38048b977 1 
00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=acc9a04bc0222259248790d38048b977 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:17.739 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eLL 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eLL 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eLL 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa22fb0623cde81d4a81969abbcc9992 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NdU 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa22fb0623cde81d4a81969abbcc9992 1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa22fb0623cde81d4a81969abbcc9992 1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa22fb0623cde81d4a81969abbcc9992 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NdU 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NdU 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.NdU 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=2cb26a731dae8ceacf06c24d7906d9f69b9755f084a2600e 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DLu 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2cb26a731dae8ceacf06c24d7906d9f69b9755f084a2600e 2 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2cb26a731dae8ceacf06c24d7906d9f69b9755f084a2600e 2 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2cb26a731dae8ceacf06c24d7906d9f69b9755f084a2600e 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DLu 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DLu 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DLu 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e07c6a9918d3680d41c5b41bc508f344 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hww 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e07c6a9918d3680d41c5b41bc508f344 0 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e07c6a9918d3680d41c5b41bc508f344 0 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e07c6a9918d3680d41c5b41bc508f344 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hww 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hww 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Hww 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a07e71b5abdc1f4cc9569e138e3c22f8bc7b65e522de5d08552a855bad9e7eb 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vq4 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a07e71b5abdc1f4cc9569e138e3c22f8bc7b65e522de5d08552a855bad9e7eb 3 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a07e71b5abdc1f4cc9569e138e3c22f8bc7b65e522de5d08552a855bad9e7eb 3 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a07e71b5abdc1f4cc9569e138e3c22f8bc7b65e522de5d08552a855bad9e7eb 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vq4 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vq4 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Vq4 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2042228 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2042228 ']' 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
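Each gen_dhchap_key/format_key pair traced above turns a hex string read from /dev/urandom into a DHCHAP secret file. A minimal sketch of that composition, assuming the DHHC-1 text form from the NVMe in-band authentication spec (a two-hex-digit hash identifier followed by base64 of the key bytes with a little-endian CRC-32 appended); the function name below is made up for illustration and is not part of the test scripts:

  gen_dhchap_key_sketch() {   # $1 = hash id (0 null, 1 sha256, 2 sha384, 3 sha512), $2 = key length in hex chars
      local digest=$1 len=$2 key
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 32 hex chars for the null-digest keys above
      python3 -c 'import base64,struct,sys,zlib; k=sys.argv[2].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$digest" "$key"
  }
  file=$(mktemp -t spdk.key-null.XXX)
  gen_dhchap_key_sketch 0 32 > "$file" && chmod 0600 "$file"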
00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:17.997 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tRK 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.KcZ ]] 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KcZ 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sZS 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.255 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.B4p ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B4p 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eLL 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.NdU ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NdU 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DLu 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Hww ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Hww 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Vq4 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
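Once generated, each secret file is registered with the SPDK target through the keyring_file_add_key RPC (key0..key4 plus the matching ckey<N> controller secrets above). Stand-alone, the same registration looks roughly like this (a sketch; the rpc.py path follows this workspace and the /tmp file names stand in for the mktemp outputs shown above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.XXX      # host secret for keyid 0
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XXX    # matching controller secret (bidirectional auth)
# ...repeated for key1/ckey1 through key4; keyid 4 has no controller secret in this run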
00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:18.513 15:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:19.445 Waiting for block devices as requested 00:35:19.703 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:19.703 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:19.960 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:19.960 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:19.960 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.217 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.217 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.217 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.217 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:20.475 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.475 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.475 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.475 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.732 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.732 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.732 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.732 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:21.297 No valid GPT data, bailing 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:21.297 00:35:21.297 Discovery Log Number of Records 2, Generation counter 2 00:35:21.297 =====Discovery Log Entry 0====== 00:35:21.297 trtype: tcp 00:35:21.297 adrfam: ipv4 00:35:21.297 subtype: current discovery subsystem 00:35:21.297 treq: not specified, sq flow control disable supported 00:35:21.297 portid: 1 00:35:21.297 trsvcid: 4420 00:35:21.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:21.297 traddr: 10.0.0.1 00:35:21.297 eflags: none 00:35:21.297 sectype: none 00:35:21.297 =====Discovery Log Entry 1====== 00:35:21.297 trtype: tcp 00:35:21.297 adrfam: ipv4 00:35:21.297 subtype: nvme subsystem 00:35:21.297 treq: not specified, sq flow control disable supported 00:35:21.297 portid: 1 00:35:21.297 trsvcid: 4420 00:35:21.297 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:21.297 traddr: 10.0.0.1 00:35:21.297 eflags: none 00:35:21.297 sectype: none 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 
]] 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.297 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.298 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.556 nvme0n1 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.556 15:08:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.556 
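On the kernel target side, nvmet_auth_set_key pushes the negotiated hash, DH group, and secrets into the configfs entry for the allowed host. A sketch of those writes (attribute names as exposed by nvmet when built with CONFIG_NVME_TARGET_AUTH; the host NQN and values are the ones used in this run, the secrets are placeholders):

host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # HMAC for DH-HMAC-CHAP
echo ffdhe2048      > "$host_dir/dhchap_dhgroup"   # DH group (ffdhe2048..ffdhe8192 are cycled below)
echo "DHHC-1:00:<host secret>:" > "$host_dir/dhchap_key"        # host secret
echo "DHHC-1:03:<ctrl secret>:" > "$host_dir/dhchap_ctrl_key"   # controller secret, only for bidirectional auth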
15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.556 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.814 nvme0n1 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.814 15:08:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.814 15:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.814 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.814 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.815 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.072 nvme0n1 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.072 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.073 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.330 nvme0n1 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:22.330 15:08:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.330 nvme0n1 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.330 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.587 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.588 nvme0n1 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.588 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.847 15:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.847 nvme0n1 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.847 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.105 nvme0n1 00:35:23.105 
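Each connect_authenticate iteration is the two initiator-side RPCs traced above: restrict the allowed digests and DH groups, then attach using the keyring names. As stand-alone commands (a sketch; address, NQNs, and key names copied from this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # names registered earlier via keyring_file_add_key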
15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.105 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.364 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.622 nvme0n1 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.622 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.880 nvme0n1 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.880 
15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.880 15:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.880 15:08:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.880 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 nvme0n1 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:24.137 15:08:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.137 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.395 nvme0n1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.395 15:08:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.395 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.652 nvme0n1 00:35:24.652 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.910 15:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.910 15:08:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.910 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.911 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.169 nvme0n1 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.169 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.427 nvme0n1 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.427 15:08:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.427 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.685 15:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.942 nvme0n1 00:35:25.942 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.942 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:25.943 15:08:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.943 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.509 nvme0n1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.509 
15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.509 15:08:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.509 15:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.074 nvme0n1 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.074 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.640 nvme0n1 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.640 
15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.640 15:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.206 nvme0n1 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.206 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.207 15:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.772 nvme0n1 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.772 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.030 15:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.964 nvme0n1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.964 15:08:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:29.964 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.965 15:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 nvme0n1 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.950 15:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.887 nvme0n1 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.887 
15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.887 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
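The nvmet_auth_set_key calls traced above push each DHHC-1 secret to the kernel nvmet target before the SPDK host tries to authenticate against it. The echo commands in the trace strongly suggest configfs writes; the sketch below is an illustrative reconstruction, not the script's literal code, and the configfs path and attribute names (dhchap_key, dhchap_ctrlr_key, dhchap_hash, dhchap_dhgroup) are assumptions about the Linux nvmet interface. The host NQN and key values are the ones visible in this pass of the log.

#!/usr/bin/env bash
# Illustrative sketch only: programming DH-HMAC-CHAP material for one host entry
# on a Linux kernel nvmet target. Attribute names below are assumed, not taken
# from auth.sh.
hostnqn=nqn.2024-02.io.spdk:host0                   # host NQN used in this log
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed configfs location

key='DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==:'
ckey='DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7:'

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # digest for this iteration
echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"    # DH group for this iteration
echo "$key"         > "$host_dir/dhchap_key"        # host (unidirectional) secret
echo "$ckey"        > "$host_dir/dhchap_ctrlr_key"  # controller secret (bidirectional)
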
00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.146 15:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.147 15:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.147 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.147 15:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.082 nvme0n1 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.082 
15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.082 15:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.018 nvme0n1 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.018 nvme0n1 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.018 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.019 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.019 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.019 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.019 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
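The DHHC-1 strings echoed throughout this trace follow the NVMe-oF secret representation: a "DHHC-1:" prefix, a two-digit field naming an optional hash transform of the secret (00 = none, 01/02/03 = SHA-256/384/512), and a base64 blob encoding the secret followed by a 4-byte CRC-32, terminated by a colon. The snippet below is an illustrative decode of key0 from this log, not part of auth.sh, showing that it carries a 32-byte secret.

#!/usr/bin/env bash
# Illustrative decode of a DHHC-1 secret taken from this log; not part of auth.sh.
key='DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9:'

b64=${key#DHHC-1:*:}    # drop the "DHHC-1:<transform>:" prefix
b64=${b64%:}            # drop the trailing ':'
total=$(printf '%s' "$b64" | base64 -d | wc -c)

# The last 4 decoded bytes are a CRC-32 over the secret, so the secret itself
# is 4 bytes shorter than the decoded blob.
echo "secret length: $((total - 4)) bytes"   # prints 32 for this key
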
00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.276 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.277 nvme0n1 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.277 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 nvme0n1 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 nvme0n1 00:35:34.793 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.793 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.793 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.793 15:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.793 15:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.793 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.050 nvme0n1 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
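The "for digest", "for dhgroup", and "for keyid" markers at host/auth.sh@100-102, together with the nvmet_auth_set_key/connect_authenticate pairs at @103-104, show that the script sweeps every digest/DH-group combination across every configured key. The skeleton below condenses that visible structure; the array entries beyond the values seen in this log, and the bodies of the two helpers, live earlier in the script and are assumptions here.

# Condensed sketch of the iteration visible at host/auth.sh@100-104. The keys/ckeys
# arrays and the two helper functions are defined earlier in auth.sh (not shown);
# array entries not seen in this log are assumptions.
digests=(sha256 sha384 sha512)                                      # sha512 assumed
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # full list assumed

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach from the SPDK host and verify
		done
	done
done
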
00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.050 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.309 nvme0n1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
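Each connect_authenticate pass on the initiator side reduces to the same handful of RPCs that keep reappearing in this trace. Written out as direct scripts/rpc.py calls (the rpc_cmd wrapper in the log forwards its arguments to that client; the relative path is an assumption), one pass for sha384/ffdhe3072 with key1 looks roughly like this; key1 and ckey1 are names of keys registered earlier in the script.

# Roughly what one connect_authenticate pass does on the SPDK host, expressed as
# plain rpc.py calls. The rpc.py path is assumed; all flags are the ones visible
# in the trace above.
rpc=./scripts/rpc.py

# Restrict the initiator to the digest/DH-group pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach to the target and authenticate; --dhchap-ctrlr-key enables bidirectional auth.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1

# The controller (and its nvme0n1 namespace) only appears if DH-HMAC-CHAP succeeded.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'

# Tear down before the next digest/dhgroup/key combination.
$rpc bdev_nvme_detach_controller nvme0
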
00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.309 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.568 nvme0n1 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.568 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.569 15:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.829 nvme0n1 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.829 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.087 nvme0n1 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.087 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.088 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.347 nvme0n1 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.347 15:08:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.347 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.606 nvme0n1 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.606 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.865 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.865 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.866 15:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.125 nvme0n1 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.125 15:08:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.125 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.384 nvme0n1 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:37.385 15:08:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.385 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.644 nvme0n1 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.644 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:37.904 15:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 nvme0n1 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.165 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.732 nvme0n1 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.732 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.733 15:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.299 nvme0n1 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.299 15:08:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.299 15:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.238 nvme0n1 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.238 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.806 nvme0n1 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:40.806 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
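The ip_candidates lines traced above come from the get_main_ns_ip helper in nvmf/common.sh: it maps the transport type to the name of the environment variable holding the usable address, dereferences that entry for the active transport (tcp in this run), and echoes the resolved address (10.0.0.1) for the caller to pass to bdev_nvme_attach_controller. A minimal bash sketch of that selection logic, reconstructed only from the trace; the variable names NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are copied from the log, while the TEST_TRANSPORT name, the indirect expansion, and the error handling are assumptions:

# Sketch (assumed, not the verbatim nvmf/common.sh source):
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unknown; TEST_TRANSPORT is "tcp" here (assumed name).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                     # dereference, e.g. NVMF_INITIATOR_IP -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}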
00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.807 15:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.376 nvme0n1 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
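Each keyid iteration then runs connect_authenticate (host/auth.sh@104), which is what produces the bdev_nvme_set_options, bdev_nvme_attach_controller, bdev_nvme_get_controllers and bdev_nvme_detach_controller RPCs repeated throughout this trace. Below is a condensed, hedged reading of one iteration as it appears in the log, not the literal host/auth.sh source; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and the address, port, NQNs and key names are copied from the trace above. In the real run the get_controllers check and the detach happen at the top of the following iteration (host/auth.sh@64 and @65):

# Condensed from the trace for digest=sha384, dhgroup=ffdhe8192, keyid=0:
digest=sha384 dhgroup=ffdhe8192 keyid=0

# 1. Restrict the host to a single digest and DH group so the DH-HMAC-CHAP
#    negotiation is forced onto the combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach with the host key; a controller key is added when the test defines
#    a ckey for this keyid (bidirectional authentication).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Confirm the authenticated controller exists, then detach before the next case.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0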
00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.376 15:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.312 nvme0n1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.312 15:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.244 nvme0n1 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.244 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.245 15:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.175 nvme0n1 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.175 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:44.432 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.433 15:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.365 nvme0n1 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.365 15:08:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.365 15:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.296 nvme0n1 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.296 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.553 nvme0n1 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.553 15:08:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.553 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.810 15:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.811 nvme0n1 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.811 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.068 nvme0n1 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.068 15:08:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.068 15:08:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.068 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.326 nvme0n1 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.326 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.584 nvme0n1 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.584 15:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.842 nvme0n1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.842 
15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.842 15:08:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.842 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.100 nvme0n1 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.100 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.101 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 nvme0n1 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 15:08:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.359 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.638 nvme0n1 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.638 
15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.638 15:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.901 nvme0n1 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.901 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.158 nvme0n1 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.158 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.417 15:08:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.417 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.676 nvme0n1 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:49.676 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.677 15:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.935 nvme0n1 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.935 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.936 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.502 nvme0n1 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.502 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.760 nvme0n1 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.760 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.761 15:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.761 15:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.761 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.761 15:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.326 nvme0n1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.326 15:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.906 nvme0n1 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.906 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.472 nvme0n1 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.472 15:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.039 nvme0n1 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.039 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.604 nvme0n1 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.604 15:08:32 
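Each of the iterations traced above reduces to the same initiator-side RPC sequence; rpc_cmd in the trace is the test harness wrapper around SPDK's scripts/rpc.py. A minimal stand-alone sketch of that sequence is below. The address, port, NQNs and flags match the log, but the sketch assumes the named DH-HMAC-CHAP secrets (key1/ckey1) are already known to the running SPDK application, which this excerpt does not show being set up.

  # Restrict the initiator to the digest/DH-group pair under test.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Connect to the kernel target, authenticating with host key "key1" and controller key "ckey1".
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Confirm the controller came up, then detach before the next digest/dhgroup/key combination.
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
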
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkyYTAxM2Q1ODM0MjM2OTcyZWE1NjVmZGQzOGJlZmTTSLr9: 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDIxNTEzMDM1MWFmOTQ0YTI0YTIzZmMxMWE3NWU5ZTY1ZjQ3OTkzZDhhYWI4NThhYWU4NWUzZjc1ODE3NGQ4OdeOt7U=: 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:53.604 15:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:53.862 15:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:53.862 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.862 15:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.795 nvme0n1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.795 15:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.723 nvme0n1 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.723 15:08:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.723 15:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWNjOWEwNGJjMDIyMjI1OTI0ODc5MGQzODA0OGI5NzdovoJk: 00:35:55.723 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: ]] 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEyMmZiMDYyM2NkZTgxZDRhODE5NjlhYmJjYzk5OTJOPyHS: 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.724 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.655 nvme0n1 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.914 15:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNiMjZhNzMxZGFlOGNlYWNmMDZjMjRkNzkwNmQ5ZjY5Yjk3NTVmMDg0YTI2MDBlch5xSw==: 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTA3YzZhOTkxOGQzNjgwZDQxYzViNDFiYzUwOGYzNDSosSm7: 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:56.914 15:08:36 
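Before each connect attempt, nvmet_auth_set_key pushes the matching digest, DH group and DHHC-1 secrets into the kernel target's host entry; only the echoed values are visible in the trace. A rough sketch of that target-side step is below. The configfs attribute names are the standard Linux nvmet ones and are assumed here rather than taken from this excerpt, and the secrets are truncated for brevity.

  # Run as root on the target; the host entry itself was created during subsystem setup.
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'              > "$HOST/dhchap_hash"      # digest under test
  echo ffdhe8192                   > "$HOST/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:02:MmNiMjZhNzMx...' > "$HOST/dhchap_key"       # host secret (truncated)
  echo 'DHHC-1:00:ZTA3YzZhOTkx...' > "$HOST/dhchap_ctrl_key"  # controller secret for bidirectional auth (truncated)
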
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.914 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.846 nvme0n1 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGEwN2U3MWI1YWJkYzFmNGNjOTU2OWUxMzhlM2MyMmY4YmM3YjY1ZTUyMmRlNWQwODU1MmE4NTViYWQ5ZTdlYurzbqg=: 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:57.846 15:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.778 nvme0n1 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.778 15:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3ZGJlYzJhNTA3N2IxYjk2Mzc4N2VjMzc0NDI1ZmRlMmVjMzUxNjRhM2Y4YzkyL9jZXQ==: 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzcwZWMxMjdkMzMwMWJmNDk5NTdiZGFiN2ZlYzlmMWI1YjhjOGJiMmRjNGUyY2Zkr72cWA==: 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.778 
15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.778 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.778 request: 00:35:58.778 { 00:35:58.778 "name": "nvme0", 00:35:58.778 "trtype": "tcp", 00:35:58.779 "traddr": "10.0.0.1", 00:35:58.779 "adrfam": "ipv4", 00:35:58.779 "trsvcid": "4420", 00:35:58.779 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:58.779 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:58.779 "prchk_reftag": false, 00:35:58.779 "prchk_guard": false, 00:35:58.779 "hdgst": false, 00:35:58.779 "ddgst": false, 00:35:58.779 "method": "bdev_nvme_attach_controller", 00:35:58.779 "req_id": 1 00:35:58.779 } 00:35:58.779 Got JSON-RPC error response 00:35:58.779 response: 00:35:58.779 { 00:35:58.779 "code": -5, 00:35:58.779 "message": "Input/output error" 00:35:58.779 } 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.779 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.037 request: 00:35:59.037 { 00:35:59.037 "name": "nvme0", 00:35:59.037 "trtype": "tcp", 00:35:59.037 "traddr": "10.0.0.1", 00:35:59.037 "adrfam": "ipv4", 00:35:59.037 "trsvcid": "4420", 00:35:59.037 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.037 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.037 "prchk_reftag": false, 00:35:59.037 "prchk_guard": false, 00:35:59.037 "hdgst": false, 00:35:59.037 "ddgst": false, 00:35:59.037 "dhchap_key": "key2", 00:35:59.037 "method": "bdev_nvme_attach_controller", 00:35:59.037 "req_id": 1 00:35:59.037 } 00:35:59.037 Got JSON-RPC error response 00:35:59.037 response: 00:35:59.037 { 00:35:59.037 "code": -5, 00:35:59.037 "message": "Input/output error" 00:35:59.037 } 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:59.037 15:08:38 
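The two request/response pairs above are negative tests: attaching with no DH-HMAC-CHAP key, or with key2 when the target was keyed for key1, must be rejected, and the harness's NOT wrapper treats the JSON-RPC -5 (Input/output error) as the expected outcome. Outside the harness, roughly the same check could be written as the sketch below; the flags mirror the failing request shown in the log.

  # Authentication is required on this subsystem, so an attach without any key must fail.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
         -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: unauthenticated connect succeeded" >&2
      exit 1
  fi
  # On the expected failure rpc.py exits non-zero and prints the JSON-RPC error shown above ("code": -5).
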
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.037 request: 00:35:59.037 { 00:35:59.037 "name": "nvme0", 00:35:59.037 "trtype": "tcp", 00:35:59.037 "traddr": "10.0.0.1", 00:35:59.037 "adrfam": "ipv4", 
00:35:59.037 "trsvcid": "4420", 00:35:59.037 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.037 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.037 "prchk_reftag": false, 00:35:59.037 "prchk_guard": false, 00:35:59.037 "hdgst": false, 00:35:59.037 "ddgst": false, 00:35:59.037 "dhchap_key": "key1", 00:35:59.037 "dhchap_ctrlr_key": "ckey2", 00:35:59.037 "method": "bdev_nvme_attach_controller", 00:35:59.037 "req_id": 1 00:35:59.037 } 00:35:59.037 Got JSON-RPC error response 00:35:59.037 response: 00:35:59.037 { 00:35:59.037 "code": -5, 00:35:59.037 "message": "Input/output error" 00:35:59.037 } 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:59.037 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:59.037 rmmod nvme_tcp 00:35:59.296 rmmod nvme_fabrics 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2042228 ']' 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2042228 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2042228 ']' 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2042228 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2042228 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2042228' 00:35:59.296 killing process with pid 2042228 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2042228 00:35:59.296 15:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2042228 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.227 15:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:02.761 15:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:03.694 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:03.694 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:03.694 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:04.627 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:04.886 15:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tRK /tmp/spdk.key-null.sZS /tmp/spdk.key-sha256.eLL /tmp/spdk.key-sha384.DLu /tmp/spdk.key-sha512.Vq4 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:04.886 15:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:05.822 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:05.822 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:05.822 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:05.822 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:05.822 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:05.822 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:05.822 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:05.822 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:05.822 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:05.822 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:05.822 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:05.822 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:05.822 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:05.822 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:05.822 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:05.822 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:05.822 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:06.081 00:36:06.081 real 0m51.560s 00:36:06.081 user 0m49.215s 00:36:06.081 sys 0m6.105s 00:36:06.081 15:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:06.081 15:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.081 ************************************ 00:36:06.081 END TEST nvmf_auth_host 00:36:06.081 ************************************ 00:36:06.081 15:08:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:06.081 15:08:45 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:36:06.081 15:08:45 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:06.081 15:08:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:06.081 15:08:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:06.081 15:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.081 ************************************ 00:36:06.081 START TEST nvmf_digest 00:36:06.081 ************************************ 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:06.081 * Looking for test storage... 
00:36:06.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:06.081 15:08:45 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:06.081 15:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:08.638 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:08.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:08.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:08.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:08.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:08.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:36:08.639 00:36:08.639 --- 10.0.0.2 ping statistics --- 00:36:08.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.639 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:36:08.639 00:36:08.639 --- 10.0.0.1 ping statistics --- 00:36:08.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.639 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.639 ************************************ 00:36:08.639 START TEST nvmf_digest_clean 00:36:08.639 ************************************ 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2051924 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2051924 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2051924 ']' 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.639 
15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.639 15:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:08.639 [2024-07-14 15:08:47.565446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:08.639 [2024-07-14 15:08:47.565589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.639 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.639 [2024-07-14 15:08:47.698829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.639 [2024-07-14 15:08:47.919367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.639 [2024-07-14 15:08:47.919450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.639 [2024-07-14 15:08:47.919476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.639 [2024-07-14 15:08:47.919501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.639 [2024-07-14 15:08:47.919519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
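The nvmftestinit trace above builds the physical-NIC topology for this digest run: one E810 port (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. A minimal standalone sketch of that setup, using only the interface names, addresses and port taken from this log (everything else is assumed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the target
  ping -c 1 10.0.0.2                                        # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator reachability check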
00:36:08.639 [2024-07-14 15:08:47.919563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.205 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:09.771 null0 00:36:09.771 [2024-07-14 15:08:48.870001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.771 [2024-07-14 15:08:48.894256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2052077 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2052077 /var/tmp/bperf.sock 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2052077 ']' 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:09.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:09.771 15:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:09.771 [2024-07-14 15:08:48.985262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:09.771 [2024-07-14 15:08:48.985427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052077 ] 00:36:09.771 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.029 [2024-07-14 15:08:49.131268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.287 [2024-07-14 15:08:49.391219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.852 15:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:10.852 15:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:10.852 15:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:10.852 15:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:10.852 15:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:11.417 15:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.417 15:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.674 nvme0n1 00:36:11.674 15:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:11.674 15:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:11.932 Running I/O for 2 seconds... 
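Each run_bperf iteration in this section follows the same pattern visible in the trace: start bdevperf paused on its own RPC socket, attach an NVMe-oF/TCP controller with data digest enabled, then kick off the timed workload through the bdevperf RPC helper. A condensed sketch of that sequence, using the socket path, NQN and flags that appear above (SPDK path shortened; the script itself also waits for the socket before issuing RPCs):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                # --ddgst enables the NVMe/TCP data digest (crc32c)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests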
00:36:13.830 00:36:13.830 Latency(us) 00:36:13.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.830 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:13.830 nvme0n1 : 2.01 13739.20 53.67 0.00 0.00 9300.60 4830.25 20097.71 00:36:13.830 =================================================================================================================== 00:36:13.830 Total : 13739.20 53.67 0.00 0.00 9300.60 4830.25 20097.71 00:36:13.830 0 00:36:13.830 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:13.830 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:13.830 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:13.830 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:13.830 | select(.opcode=="crc32c") 00:36:13.830 | "\(.module_name) \(.executed)"' 00:36:13.830 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2052077 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2052077 ']' 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2052077 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2052077 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2052077' 00:36:14.094 killing process with pid 2052077 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2052077 00:36:14.094 Received shutdown signal, test time was about 2.000000 seconds 00:36:14.094 00:36:14.094 Latency(us) 00:36:14.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.094 =================================================================================================================== 00:36:14.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.094 15:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2052077 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:15.466 15:08:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2052743 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2052743 /var/tmp/bperf.sock 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2052743 ']' 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:15.466 15:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.466 [2024-07-14 15:08:54.466291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:15.466 [2024-07-14 15:08:54.466428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052743 ] 00:36:15.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:15.466 Zero copy mechanism will not be used. 
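After each 2-second run the script verifies that digests were really computed rather than skipped: it reads the accel framework statistics out of bdevperf and checks that the crc32c opcode was executed a non-zero number of times by the expected module (software here, since these runs use scan_dsa=false). A sketch of that check, assembled from the rpc.py and jq calls shown in the trace:

  get_accel_stats() {
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
          jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  }
  read -r acc_module acc_executed < <(get_accel_stats)
  exp_module=software                    # scan_dsa=false, so crc32c should run in the software module
  (( acc_executed > 0 ))                 # crc32c was actually executed
  [[ $acc_module == "$exp_module" ]]     # and by the expected module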
00:36:15.466 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.466 [2024-07-14 15:08:54.599142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.724 [2024-07-14 15:08:54.857346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.290 15:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:16.290 15:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:16.290 15:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:16.290 15:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:16.290 15:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:16.856 15:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:16.856 15:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.422 nvme0n1 00:36:17.422 15:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.422 15:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.422 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:17.422 Zero copy mechanism will not be used. 00:36:17.422 Running I/O for 2 seconds... 
00:36:19.952 00:36:19.952 Latency(us) 00:36:19.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.952 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:19.952 nvme0n1 : 2.00 4652.73 581.59 0.00 0.00 3431.15 1128.68 7524.50 00:36:19.952 =================================================================================================================== 00:36:19.952 Total : 4652.73 581.59 0.00 0.00 3431.15 1128.68 7524.50 00:36:19.952 0 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:19.952 | select(.opcode=="crc32c") 00:36:19.952 | "\(.module_name) \(.executed)"' 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2052743 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2052743 ']' 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2052743 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2052743 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2052743' 00:36:19.952 killing process with pid 2052743 00:36:19.952 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2052743 00:36:19.952 Received shutdown signal, test time was about 2.000000 seconds 00:36:19.952 00:36:19.952 Latency(us) 00:36:19.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.952 =================================================================================================================== 00:36:19.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:19.953 15:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2052743 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:20.887 15:09:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2053414 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2053414 /var/tmp/bperf.sock 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2053414 ']' 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:20.887 15:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.887 [2024-07-14 15:09:00.106126] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:20.887 [2024-07-14 15:09:00.106285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053414 ] 00:36:20.887 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.146 [2024-07-14 15:09:00.240767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.405 [2024-07-14 15:09:00.502750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.971 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:21.971 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:21.971 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:21.971 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:21.971 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:22.537 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.537 15:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.795 nvme0n1 00:36:22.795 15:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:22.795 15:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.053 Running I/O for 2 seconds... 
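The IOPS and MiB/s columns in the two randread tables above are internally consistent, which is a quick sanity check on any run: throughput is just IOPS times the I/O size. For the values reported here:

  # 4 KiB,   QD 128: 13739.20 IO/s * 4096 B   / 2^20 =  53.67 MiB/s
  # 128 KiB, QD 16 :  4652.73 IO/s * 131072 B / 2^20 = 581.59 MiB/s
  awk 'BEGIN { printf "%.2f %.2f\n", 13739.20*4096/1048576, 4652.73*131072/1048576 }'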
00:36:24.952 00:36:24.952 Latency(us) 00:36:24.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.952 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:24.952 nvme0n1 : 2.00 16516.28 64.52 0.00 0.00 7739.76 4223.43 16893.72 00:36:24.952 =================================================================================================================== 00:36:24.952 Total : 16516.28 64.52 0.00 0.00 7739.76 4223.43 16893.72 00:36:24.952 0 00:36:24.952 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:24.952 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:24.952 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:24.952 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:24.952 | select(.opcode=="crc32c") 00:36:24.952 | "\(.module_name) \(.executed)"' 00:36:24.952 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2053414 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2053414 ']' 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2053414 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:25.209 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2053414 00:36:25.467 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:25.467 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:25.467 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2053414' 00:36:25.467 killing process with pid 2053414 00:36:25.467 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2053414 00:36:25.467 Received shutdown signal, test time was about 2.000000 seconds 00:36:25.467 00:36:25.467 Latency(us) 00:36:25.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.467 =================================================================================================================== 00:36:25.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:25.467 15:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2053414 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:26.401 15:09:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2054204 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2054204 /var/tmp/bperf.sock 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2054204 ']' 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.401 15:09:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.401 [2024-07-14 15:09:05.703515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:26.401 [2024-07-14 15:09:05.703662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054204 ] 00:36:26.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:26.401 Zero copy mechanism will not be used. 
00:36:26.659 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.659 [2024-07-14 15:09:05.838073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.917 [2024-07-14 15:09:06.098622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.483 15:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.483 15:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:27.483 15:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.483 15:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.483 15:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:28.082 15:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.082 15:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.648 nvme0n1 00:36:28.648 15:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.648 15:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.648 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:28.648 Zero copy mechanism will not be used. 00:36:28.648 Running I/O for 2 seconds... 
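For reference, the clean-digest phase consists of four bperf runs that differ only in workload, I/O size and queue depth; the last argument is scan_dsa, which is false throughout this job, so every digest is computed by the software crc32c module. As traced above:

  run_bperf randread  4096   128 false   # 4 KiB reads,    QD 128
  run_bperf randread  131072 16  false   # 128 KiB reads,  QD 16 (above the 64 KiB zero-copy threshold)
  run_bperf randwrite 4096   128 false   # 4 KiB writes,   QD 128
  run_bperf randwrite 131072 16  false   # 128 KiB writes, QD 16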
00:36:31.173 00:36:31.173 Latency(us) 00:36:31.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.173 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:31.173 nvme0n1 : 2.00 4889.99 611.25 0.00 0.00 3255.65 2524.35 9272.13 00:36:31.173 =================================================================================================================== 00:36:31.173 Total : 4889.99 611.25 0.00 0.00 3255.65 2524.35 9272.13 00:36:31.173 0 00:36:31.173 15:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:31.173 15:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:31.173 15:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:31.173 15:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:31.173 | select(.opcode=="crc32c") 00:36:31.173 | "\(.module_name) \(.executed)"' 00:36:31.173 15:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2054204 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2054204 ']' 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2054204 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2054204 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2054204' 00:36:31.173 killing process with pid 2054204 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2054204 00:36:31.173 Received shutdown signal, test time was about 2.000000 seconds 00:36:31.173 00:36:31.173 Latency(us) 00:36:31.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.173 =================================================================================================================== 00:36:31.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:31.173 15:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2054204 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2051924 00:36:32.106 15:09:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2051924 ']' 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2051924 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2051924 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2051924' 00:36:32.106 killing process with pid 2051924 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2051924 00:36:32.106 15:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2051924 00:36:33.479 00:36:33.479 real 0m25.082s 00:36:33.479 user 0m47.635s 00:36:33.479 sys 0m5.006s 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.479 ************************************ 00:36:33.479 END TEST nvmf_digest_clean 00:36:33.479 ************************************ 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.479 ************************************ 00:36:33.479 START TEST nvmf_digest_error 00:36:33.479 ************************************ 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2055536 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2055536 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2055536 ']' 00:36:33.479 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:33.480 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.480 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.480 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.480 15:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.480 [2024-07-14 15:09:12.702959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:33.480 [2024-07-14 15:09:12.703108] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.480 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.737 [2024-07-14 15:09:12.836129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.995 [2024-07-14 15:09:13.089762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.995 [2024-07-14 15:09:13.089833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.995 [2024-07-14 15:09:13.089873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.995 [2024-07-14 15:09:13.089906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.995 [2024-07-14 15:09:13.089928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:33.995 [2024-07-14 15:09:13.089978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.559 [2024-07-14 15:09:13.644214] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.559 15:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.817 null0 00:36:34.817 [2024-07-14 15:09:14.029374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.817 [2024-07-14 15:09:14.053616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2055687 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2055687 /var/tmp/bperf.sock 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2055687 ']' 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:34.817 15:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.075 [2024-07-14 15:09:14.138235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:35.075 [2024-07-14 15:09:14.138370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055687 ] 00:36:35.075 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.075 [2024-07-14 15:09:14.267861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.334 [2024-07-14 15:09:14.522071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.900 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:35.900 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:35.900 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.900 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.158 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.724 nvme0n1 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:36.724 15:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.724 Running I/O for 2 seconds... 00:36:36.724 [2024-07-14 15:09:15.929287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.724 [2024-07-14 15:09:15.929363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.724 [2024-07-14 15:09:15.929398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.724 [2024-07-14 15:09:15.951385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.725 [2024-07-14 15:09:15.951438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.725 [2024-07-14 15:09:15.951468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.725 [2024-07-14 15:09:15.970791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.725 [2024-07-14 15:09:15.970839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.725 [2024-07-14 15:09:15.970869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.725 [2024-07-14 15:09:15.987353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.725 [2024-07-14 15:09:15.987401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.725 [2024-07-14 15:09:15.987432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.725 [2024-07-14 15:09:16.007289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.725 [2024-07-14 15:09:16.007338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.725 [2024-07-14 15:09:16.007379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.725 [2024-07-14 15:09:16.023208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.725 [2024-07-14 15:09:16.023256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.725 [2024-07-14 15:09:16.023286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.041755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.041802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 
nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.041833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.060899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.060955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.060996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.077422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.077469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.097143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.097204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.097234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.116469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.116516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.116546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.132012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.132079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.983 [2024-07-14 15:09:16.132106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.983 [2024-07-14 15:09:16.151375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.983 [2024-07-14 15:09:16.151422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.151451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.168599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 
15:09:16.168646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.168676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.185849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.185947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.207328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.207378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.207408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.222790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.222838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.222868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.242726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.242773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.242803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.263020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.263074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.263100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.984 [2024-07-14 15:09:16.278718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.984 [2024-07-14 15:09:16.278765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.984 [2024-07-14 15:09:16.278795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.242 [2024-07-14 15:09:16.298573] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.242 [2024-07-14 15:09:16.298622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.242 [2024-07-14 15:09:16.298651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.242 [2024-07-14 15:09:16.315200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.242 [2024-07-14 15:09:16.315248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.315287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.336486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.336535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.336564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.357498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.357545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.357575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.374673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.374720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.374750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.393106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.393159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.393185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.410282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.410329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.410359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.427446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.427493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.427523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.446196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.446251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.446281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.464293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.464342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.464371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.480781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.480829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.480859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.497063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.497117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.497143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.515421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.515471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.515501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.243 [2024-07-14 15:09:16.531792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.243 [2024-07-14 15:09:16.531840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.243 [2024-07-14 15:09:16.531870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.501 [2024-07-14 15:09:16.551411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.501 [2024-07-14 15:09:16.551459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.501 [2024-07-14 15:09:16.551489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.501 [2024-07-14 15:09:16.569816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.501 [2024-07-14 15:09:16.569863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.501 [2024-07-14 15:09:16.569902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.501 [2024-07-14 15:09:16.589690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.501 [2024-07-14 15:09:16.589737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.589767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.607989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.608032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.608058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.622752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.622800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.622837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.643041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.643100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.643128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.662103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.662160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17781 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.662187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.677759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.677807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.677835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.696275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.696322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.696353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.715777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.715826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.715856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.730988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.731029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.731054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.752360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.752408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.752437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.773180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.773242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.773271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.502 [2024-07-14 15:09:16.789576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.502 [2024-07-14 15:09:16.789624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.502 [2024-07-14 15:09:16.789653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.810084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.810140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.810167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.829854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.829928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.829956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.845890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.845947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.845971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.867038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.867079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.867121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.884114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.884171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.884196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.903575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.903624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.903654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.925574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.925622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.925651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.941069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.941122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.941156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.958546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.958622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.975498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.975546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.975575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:16.993599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:16.993646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:16.993675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:17.009354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:17.009402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:17.009431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:17.028689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:17.028735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:17.028765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 
15:09:17.044383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:17.044430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:17.044459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.761 [2024-07-14 15:09:17.063829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.761 [2024-07-14 15:09:17.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.761 [2024-07-14 15:09:17.063919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.084026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.084081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.084107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.102615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.102662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.102691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.118238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.118285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.118315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.137073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.137129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.137156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.153745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.153823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.173591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.173639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.173669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.194029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.194082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.194108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.209889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.209944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.209982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.229629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.229678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.019 [2024-07-14 15:09:17.229707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.019 [2024-07-14 15:09:17.247470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.019 [2024-07-14 15:09:17.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.020 [2024-07-14 15:09:17.247554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.020 [2024-07-14 15:09:17.266037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.020 [2024-07-14 15:09:17.266091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.020 [2024-07-14 15:09:17.266116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.020 [2024-07-14 15:09:17.284612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.020 [2024-07-14 15:09:17.284659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.020 [2024-07-14 
15:09:17.284688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.020 [2024-07-14 15:09:17.299883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.020 [2024-07-14 15:09:17.299940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.020 [2024-07-14 15:09:17.299964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.020 [2024-07-14 15:09:17.320369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.020 [2024-07-14 15:09:17.320417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.020 [2024-07-14 15:09:17.320447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.278 [2024-07-14 15:09:17.341016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.278 [2024-07-14 15:09:17.341061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.278 [2024-07-14 15:09:17.341088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.278 [2024-07-14 15:09:17.358152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.278 [2024-07-14 15:09:17.358205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.278 [2024-07-14 15:09:17.358243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.278 [2024-07-14 15:09:17.375124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.278 [2024-07-14 15:09:17.375167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.278 [2024-07-14 15:09:17.375209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.278 [2024-07-14 15:09:17.393449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.278 [2024-07-14 15:09:17.393496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.278 [2024-07-14 15:09:17.393525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.278 [2024-07-14 15:09:17.412722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.278 [2024-07-14 15:09:17.412777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:112 nsid:1 lba:25308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.278 [2024-07-14 15:09:17.412807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.430808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.430856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.430894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.448561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.448608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.448638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.463733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.463780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.463809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.482919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.483015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.501337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.501384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.501413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.516806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.516854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.516891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.539628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 
[2024-07-14 15:09:17.539675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.539705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.553623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.553670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.553707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.279 [2024-07-14 15:09:17.575247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.279 [2024-07-14 15:09:17.575293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.279 [2024-07-14 15:09:17.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.596429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.596476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.612356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.612431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.632375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.632422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.632452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.651740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.651787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.651817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.667114] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.667168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.667198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.687585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.687633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.687677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.707642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.707689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.707719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.723139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.723215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.723245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.743462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.743514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.743542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.758933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.758990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.759017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.779524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.779574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.779605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.796815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.796873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.796910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.816884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.816944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.816970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.538 [2024-07-14 15:09:17.835208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.538 [2024-07-14 15:09:17.835255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.538 [2024-07-14 15:09:17.835283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.796 [2024-07-14 15:09:17.854057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.796 [2024-07-14 15:09:17.854100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.796 [2024-07-14 15:09:17.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.796 [2024-07-14 15:09:17.869014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.796 [2024-07-14 15:09:17.869066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.796 [2024-07-14 15:09:17.869100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.796 [2024-07-14 15:09:17.888048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.796 [2024-07-14 15:09:17.888105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.796 [2024-07-14 15:09:17.888131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.796 [2024-07-14 15:09:17.904722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.797 [2024-07-14 15:09:17.904769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.797 [2024-07-14 15:09:17.904799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.797
00:36:38.797 Latency(us)
00:36:38.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:38.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:38.797 nvme0n1 : 2.01 13936.62 54.44 0.00 0.00 9170.21 4927.34 24272.59
00:36:38.797 ===================================================================================================================
00:36:38.797 Total : 13936.62 54.44 0.00 0.00 9170.21 4927.34 24272.59
00:36:38.797 0
00:36:38.797 15:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:38.797 15:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:38.797 15:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:38.797 15:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:38.797 | .driver_specific
00:36:38.797 | .nvme_error
00:36:38.797 | .status_code
00:36:38.797 | .command_transient_transport_error'
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 ))
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2055687
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2055687 ']'
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2055687
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2055687
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2055687'
00:36:39.055 killing process with pid 2055687
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2055687
00:36:39.055 Received shutdown signal, test time was about 2.000000 seconds
00:36:39.055
00:36:39.055 Latency(us)
00:36:39.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:39.055 ===================================================================================================================
00:36:39.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2055687
00:36:39.055 15:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@56 -- # bs=131072
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2056351
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2056351 /var/tmp/bperf.sock
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2056351 ']'
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:40.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:40.001 15:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:40.259 [2024-07-14 15:09:19.369766] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:40.259 [2024-07-14 15:09:19.369941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056351 ]
00:36:40.259 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:40.259 Zero copy mechanism will not be used.
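The trace above shows run_bperf_err starting a dedicated bdevperf instance for the 131072-byte, queue-depth-16 error run on a private RPC socket, then waiting for that socket before configuring it; the EAL and reactor messages that follow are that process coming up, after which waitforlisten returns. A minimal standalone sketch of this step, assuming the workspace layout seen in this log (the readiness poll below is only an approximation of the waitforlisten helper from autotest_common.sh, not its actual implementation, and uses the generic rpc_get_methods call as the probe):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf on core mask 0x2 against a private RPC socket; -z makes it wait
    # for an explicit perform_tests RPC instead of running a workload immediately.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Rough stand-in for waitforlisten: poll the socket until any RPC succeeds.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done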
00:36:40.259 EAL: No free 2048 kB hugepages reported on node 1
00:36:40.259 [2024-07-14 15:09:19.493251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:40.259 [2024-07-14 15:09:19.736010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:41.080 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:41.080 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:41.080 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.080 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.337 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.593 nvme0n1
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:41.850 15:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:41.850 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:41.850 Zero copy mechanism will not be used.
00:36:41.850 Running I/O for 2 seconds...
00:36:41.850 [2024-07-14 15:09:21.023819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.023925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.023960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.032252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.032302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.032332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.039972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.040016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.040044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.048711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.048760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.048790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.057281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.057330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.065579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.065628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.065658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.074677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.074726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.074756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.083673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.083722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.083752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.091146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.091209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.091239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.099324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.099373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.099403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.105945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.105988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.106015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.113151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.113227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.113258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.122162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.850 [2024-07-14 15:09:21.122224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.850 [2024-07-14 15:09:21.122254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.850 [2024-07-14 15:09:21.131077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.851 [2024-07-14 15:09:21.131121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:41.851 [2024-07-14 15:09:21.131148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.851 [2024-07-14 15:09:21.139498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.851 [2024-07-14 15:09:21.139547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.851 [2024-07-14 15:09:21.139577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.851 [2024-07-14 15:09:21.146540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.851 [2024-07-14 15:09:21.146587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.851 [2024-07-14 15:09:21.146625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.851 [2024-07-14 15:09:21.152972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.851 [2024-07-14 15:09:21.153015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.851 [2024-07-14 15:09:21.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.159495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.159543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.159573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.165999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.166040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.166067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.172618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.172664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.172693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.179114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.179156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.179183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.185620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.185667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.185696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.192077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.192119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.192145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.198604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.198652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.198681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.205231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.205279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.205308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.212576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.212625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.212654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.218386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.218433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.218463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.223445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 
00:36:42.109 [2024-07-14 15:09:21.223491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.223521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.227787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.227833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.227863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.232153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.232220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.232250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.236382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.236428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.236458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.241742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.241789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.241817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.248164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.248225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.248272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.252749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.252824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.257817] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.109 [2024-07-14 15:09:21.257860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.109 [2024-07-14 15:09:21.257895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.109 [2024-07-14 15:09:21.263372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.263418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.263447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.267420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.267465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.267494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.273996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.274050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.283280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.283365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.292543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.292595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.292625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.301537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.301588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.301618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.310477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.310527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.310558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.319633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.319683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.319713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.328265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.328313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.328343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.337090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.337145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.346000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.346057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.346083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.354846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.354906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.354938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.363748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.363797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.363828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.372707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.372757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.372786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.380906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.380970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.381008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.387714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.387760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.387788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.394852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.394915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.394942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.401829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.401900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.401932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.110 [2024-07-14 15:09:21.410085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.110 [2024-07-14 15:09:21.410126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.110 [2024-07-14 15:09:21.410149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.418552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.418603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.418633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.426355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.426404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.426433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.433263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.433311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.433341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.440031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.440073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.440098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.446679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.446736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.446766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.453199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.453250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.460834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.460906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.460952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.469813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.469873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.469940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.478967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.479011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.479037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.487211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.487271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.496056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.496099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.496124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.503496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.503545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.503574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.510738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.510786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.510825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.517386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.517434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.517463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.524052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.524094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.524134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.530719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.530768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.530797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.537379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.537427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.537457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.543791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.543839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.543869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.550339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.550386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.550416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.556727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.556775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.556804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.563191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.563251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.563280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.569780] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.569835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.569865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.577102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.577144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.577186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.583667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.583714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.583743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.590171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.590217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.590246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.596711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.596758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.369 [2024-07-14 15:09:21.596788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.369 [2024-07-14 15:09:21.603288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.369 [2024-07-14 15:09:21.603334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.603364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.609703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.609749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.609779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.616232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.616291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.616321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.622816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.622862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.622901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.630004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.630045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.630070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.637063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.637104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.643348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.643396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.643424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.647331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.647377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.647406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.651838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.651894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.651940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.657759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.657806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.662451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.662497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.662526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.668025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.668066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.668092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.370 [2024-07-14 15:09:21.673562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.370 [2024-07-14 15:09:21.673618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.370 [2024-07-14 15:09:21.673648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.680002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.680045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.680072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.686379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.686425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.686454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.692726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.692771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.692800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.699484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.699531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.699591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.705970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.706012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.706039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.712277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.712323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.712351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.719347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.719394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.719424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.726911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.726969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.726995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.733670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.733717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.733746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.741593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.741642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.741671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.747088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.747131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.747158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.753746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.753794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.753823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.760404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.760452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.760482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.764974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.765016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.765043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.772077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.772120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.772147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.779066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.779107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.779132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.786584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.786643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.786673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.793767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.793816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.793846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.801969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.802014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.802041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.809224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.809271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.809301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.816414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.816462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.816492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.823312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.823360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.823390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.830379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.830426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.830455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.629 [2024-07-14 15:09:21.837621] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.629 [2024-07-14 15:09:21.837668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.629 [2024-07-14 15:09:21.837697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.845590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.845638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.845668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.852030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.852073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.852099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.857562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.857608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.857637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.861823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.861868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.861906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.867266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.867313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.867343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.872796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.872842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.872922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.879362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.879409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.879438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.885857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.885912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.885956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.892912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.892972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.893013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.899722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.899769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.899821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.906243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.906304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.912812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.912859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.912915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.919183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.919229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.919258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.925536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.925582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.925611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.630 [2024-07-14 15:09:21.932039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.630 [2024-07-14 15:09:21.932082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.630 [2024-07-14 15:09:21.932108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.938464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.938512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.938541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.944983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.945041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.945067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.951125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.951182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.951208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.957410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.957456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.957485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.963636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.963682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.963711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.969996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.970053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.970078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.976245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.976299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.889 [2024-07-14 15:09:21.976328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.889 [2024-07-14 15:09:21.982628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.889 [2024-07-14 15:09:21.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:21.982719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:21.988854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:21.988909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:21.988939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:21.995372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:21.995418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:21.995447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.001719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.001766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.001795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.008045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.008092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.008131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.014366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.014412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.014441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.020566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.020612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.020641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.027009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.027051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.027092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.033265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.033311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.033340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.039806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.039855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.039894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.046261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.046307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.046336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.052706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.052781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.059097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.059139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.059166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.065488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.065536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.065565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.072096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.072139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.072165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.078434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.078481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.078509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.084838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.084894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.084941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.091080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.091122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.091163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.097455] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.097502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.097531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.104133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.104177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.104204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.110645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.117182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.117229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.117265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.123656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.123703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.130249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.130297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.130327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.136728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.136775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.136803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.143139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.143187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.143217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.149659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.149707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.149736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.156184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.156230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.156259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.162585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.890 [2024-07-14 15:09:22.162631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.890 [2024-07-14 15:09:22.162661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.890 [2024-07-14 15:09:22.168983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.891 [2024-07-14 15:09:22.169041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.891 [2024-07-14 15:09:22.169066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.891 [2024-07-14 15:09:22.175172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.891 [2024-07-14 15:09:22.175244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.891 [2024-07-14 15:09:22.175275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.891 [2024-07-14 15:09:22.181516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.891 [2024-07-14 15:09:22.181562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.891 [2024-07-14 15:09:22.181592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.891 [2024-07-14 15:09:22.187985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.891 [2024-07-14 15:09:22.188042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.891 [2024-07-14 15:09:22.188069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.891 [2024-07-14 15:09:22.194309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.891 [2024-07-14 15:09:22.194357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.891 [2024-07-14 15:09:22.194386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.200749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.200796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.200825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.205508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.205553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.205583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.210633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.210680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.210709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.216362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.216409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.216438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.222705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.222752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.222789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.229202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.229249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.229279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.235730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.235776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.235805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.242343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.242390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.242419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.248810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.150 [2024-07-14 15:09:22.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.150 [2024-07-14 15:09:22.248937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.150 [2024-07-14 15:09:22.255421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.255469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.255513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.262066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.262109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.262136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.268427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.268475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.268505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.274995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.275043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.275072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.281352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.281406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.281436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.287871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.287926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.287955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.294501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.294551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.294580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.301073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.301121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.301150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.307546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.307593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.307622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.314184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.314240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.314270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.320800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.320847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.320894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.327685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.327731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.327760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.334844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.334905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.334944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.342263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.342311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.342341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.350445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.350494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.350523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.360045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.360095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.360125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.365872] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.365927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.365957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.372579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.372628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.372658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.380584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.380633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.380663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.390661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.390713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.390744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.398797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.398846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.398900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.406547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.406605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.406636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.414110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.414159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.414198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.421619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.421668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.421698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.429091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.429140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.429175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.436461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.436509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.436540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.443694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.443742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.443772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.151 [2024-07-14 15:09:22.452193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.151 [2024-07-14 15:09:22.452242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.151 [2024-07-14 15:09:22.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.459886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.459935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.459965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.466998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.467047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.467076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.474256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.474305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.474336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.481926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.481974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.482004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.489264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.489313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.489342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.496037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.496086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.496116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.503286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.503335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.503364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.510308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.510358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.510388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.518402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.518451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.518481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.525858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.525915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.525946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.533359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.533416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.533446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.541008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.541056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.541086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.548715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.548764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.548794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.556130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.556179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.556209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.563723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.563771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.563817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.569065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.569113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.569142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.574706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.574754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.574784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.581680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.581729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.581758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.587536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.587584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.587614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.593940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.593988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.594018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.602343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.602392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.602422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.412 [2024-07-14 15:09:22.610007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.412 [2024-07-14 15:09:22.610056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.412 [2024-07-14 15:09:22.610086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.615190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.615237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.615266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.619715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.619762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.619792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.625651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.625699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.625729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.633147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.633197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.633227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.640600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.640650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.640680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.649551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.649611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.649642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.659070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.659120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.659149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.668671] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.668719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.668749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.678174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.678232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.678262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.686289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.686337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.686368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.691031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.691078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.691108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.696849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.696904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.696936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.703176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.703224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.703255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.413 [2024-07-14 15:09:22.710622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.413 [2024-07-14 15:09:22.710671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.413 [2024-07-14 15:09:22.710701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.718048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.718105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.718137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.725562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.725645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.732043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.732092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.732122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.739247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.739296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.739327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.746672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.746722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.746752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.754793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.754843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.754873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.761537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.761586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.761616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.766069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.766116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.766146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.772907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.772955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.772995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.779793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.779841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.779871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.786526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.786576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.786606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.792086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.792132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.792162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.796457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.796535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.801496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.801542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.801571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.807674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.807722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.807752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.814112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.814160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.814191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.820475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.820522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.820551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.826862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.826917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.826946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.833111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.833157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.833186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.839312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.839359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.839388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.845599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.845645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.845674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.851799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.693 [2024-07-14 15:09:22.851863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.693 [2024-07-14 15:09:22.851904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.693 [2024-07-14 15:09:22.858052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.858099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.858128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.864250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.864296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.864325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.870584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.870630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.870661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.876811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.876857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.876906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.883105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.883150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.883179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.889324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.889370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.889398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.895603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.895650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.895679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.901917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.901963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.901992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.908319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.908364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.908393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.914869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.914927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.914956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.921157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.921203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.921232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.927499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.927545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.927574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.933938] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.933985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.934014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.940365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.940412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.940441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.946619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.946666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.946695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.953043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.953091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.953121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.959424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.959471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.959500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.965870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.965925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.965954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.972280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.972326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.972355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.978812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.978860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.978901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.694 [2024-07-14 15:09:22.985328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.694 [2024-07-14 15:09:22.985375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.694 [2024-07-14 15:09:22.985413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.959 [2024-07-14 15:09:22.991756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.959 [2024-07-14 15:09:22.991804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.959 [2024-07-14 15:09:22.991832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.959 [2024-07-14 15:09:22.998107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.959 [2024-07-14 15:09:22.998154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.959 [2024-07-14 15:09:22.998183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.959 [2024-07-14 15:09:23.005625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.959 [2024-07-14 15:09:23.005673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.959 [2024-07-14 15:09:23.005702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.959 [2024-07-14 15:09:23.012831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.959 [2024-07-14 15:09:23.012890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.959 [2024-07-14 15:09:23.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.959 [2024-07-14 15:09:23.021271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.959 [2024-07-14 15:09:23.021319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.959 [2024-07-14 15:09:23.021348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:43.959
00:36:43.959 Latency(us)
00:36:43.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:43.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:43.959 nvme0n1 : 2.00 4553.62 569.20 0.00 0.00 3505.65 1007.31 10048.85
00:36:43.959 ===================================================================================================================
00:36:43.959 Total : 4553.62 569.20 0.00 0.00 3505.65 1007.31 10048.85
00:36:43.959 0
00:36:43.959 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:43.959 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:43.959 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:43.959 | .driver_specific
00:36:43.959 | .nvme_error
00:36:43.959 | .status_code
00:36:43.959 | .command_transient_transport_error'
00:36:43.959 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 294 > 0 ))
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2056351
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2056351 ']'
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2056351
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2056351
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2056351'
00:36:44.217 killing process with pid 2056351
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2056351
00:36:44.217 Received shutdown signal, test time was about 2.000000 seconds
00:36:44.217
00:36:44.217 Latency(us)
00:36:44.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:44.217 ===================================================================================================================
00:36:44.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:44.217 15:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2056351
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2056895
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2056895 /var/tmp/bperf.sock
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2056895 ']'
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:45.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:45.150 15:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:45.409 [2024-07-14 15:09:24.474593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:45.409 [2024-07-14 15:09:24.474733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056895 ]
00:36:45.409 EAL: No free 2048 kB hugepages reported on node 1
00:36:45.667 [2024-07-14 15:09:24.598775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:45.667 [2024-07-14 15:09:24.845766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:46.233 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:46.233 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:46.233 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:46.233 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:46.492 15:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.058 nvme0n1 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.058 15:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.058 Running I/O for 2 seconds... 00:36:47.058 [2024-07-14 15:09:26.344279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.058 [2024-07-14 15:09:26.344582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.058 [2024-07-14 15:09:26.344666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.058 [2024-07-14 15:09:26.362404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.058 [2024-07-14 15:09:26.362718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.059 [2024-07-14 15:09:26.362802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.380629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.380964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.381043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.398577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.398912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.398977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.416648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.417044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.417127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.434661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.435009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.435047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.452757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.453104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.453183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.470707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.471022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.471061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.488508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.488812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.488893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.506141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.506476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.506554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.523958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.524288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.524384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.541709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.542015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.542054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.559231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.559531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.559603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.576849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.577123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.577160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.594420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.594760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.594819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.317 [2024-07-14 15:09:26.612298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.317 [2024-07-14 15:09:26.612602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.317 [2024-07-14 15:09:26.612679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.630232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.630536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.630620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.647896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.648177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.665550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.665870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:47.577 [2024-07-14 15:09:26.665945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.683109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.683413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.683494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.700688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.700996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.701079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.718330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.718596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.718679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.735996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.736303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.736394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.753835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.754212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.771568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.771844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.771928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.789310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.789611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.806979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.807293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.807373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.824710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.825052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.825154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.842383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.842684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.842721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.860177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.860490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.860560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.577 [2024-07-14 15:09:26.878068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.577 [2024-07-14 15:09:26.878379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.577 [2024-07-14 15:09:26.878416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.896105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.896381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.896420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.913170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.913437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.913475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.930645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.931014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.931061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.948890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.949204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.949242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.967254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.967570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.967647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:26.985398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:26.985708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:26.985783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.003485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.003821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.003934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.021659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.022011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.022097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.039896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 
00:36:47.837 [2024-07-14 15:09:27.040221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.040288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.058079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.058411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.058480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.076184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.076582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.094034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.094357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.094441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.111912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.112332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.112404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:47.837 [2024-07-14 15:09:27.129804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.837 [2024-07-14 15:09:27.130127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.837 [2024-07-14 15:09:27.130213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.147944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.148302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.148375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.166173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.166503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.166572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.184304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.184648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.184728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.202154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.202502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.202572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.220063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.220408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.220478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.237829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.238152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.238220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.255546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.255855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.255950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.273369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.273683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.273753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.291162] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.291489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.291557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.308861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.309199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.309269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.326608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.326921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.097 [2024-07-14 15:09:27.327011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.097 [2024-07-14 15:09:27.344382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.097 [2024-07-14 15:09:27.344698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.098 [2024-07-14 15:09:27.344741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.098 [2024-07-14 15:09:27.362345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.098 [2024-07-14 15:09:27.362687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.098 [2024-07-14 15:09:27.362764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.098 [2024-07-14 15:09:27.380300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.098 [2024-07-14 15:09:27.380614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.098 [2024-07-14 15:09:27.380657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.098 [2024-07-14 15:09:27.398101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.098 [2024-07-14 15:09:27.398424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.098 [2024-07-14 15:09:27.398497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:36:48.358 [2024-07-14 15:09:27.416038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.416374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.416450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.433839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.434158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.434253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.451857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.452211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.452290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.470190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.470506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.470585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.488462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.488777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.488873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.506379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.506690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.506766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.524161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.524494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.524570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.542001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.542319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.542398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.559764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.560088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.560171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.577658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.577995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.578085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.595371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.595685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.595758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.613158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.613477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.613522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.631054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.631377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.631449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.358 [2024-07-14 15:09:27.648825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.358 [2024-07-14 15:09:27.649156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.358 [2024-07-14 15:09:27.649209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.617 [2024-07-14 15:09:27.666759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.617 [2024-07-14 15:09:27.667072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.667144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.684533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.684862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.684960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.702235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.702554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.702627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.719925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.720215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.720288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.737610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.737914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.737984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.755642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.755948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.756022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.773447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.773780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:48.618 [2024-07-14 15:09:27.773846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.791281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.791603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.791682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.809017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.809321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.809395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.826662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.826972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.827053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.844361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.844665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.844702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.862014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.862344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.862425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.879768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.880109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.880168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.897685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.898014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:3692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.898112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.618 [2024-07-14 15:09:27.915521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.618 [2024-07-14 15:09:27.915824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.618 [2024-07-14 15:09:27.915930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:27.933384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:27.933688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:27.933764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:27.951173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:27.951505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:27.951584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:27.968835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:27.969129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:27.969207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:27.986552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:27.986853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:27.986936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.004107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.004408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:28.004481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.021682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.021984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:28.022020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.039370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.039674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:28.039711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.057003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.057313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:28.057350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.074834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.075151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.877 [2024-07-14 15:09:28.075235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.877 [2024-07-14 15:09:28.092657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.877 [2024-07-14 15:09:28.092961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.092998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.878 [2024-07-14 15:09:28.110416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.878 [2024-07-14 15:09:28.110730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.110767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.878 [2024-07-14 15:09:28.128014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.878 [2024-07-14 15:09:28.128318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.128390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.878 [2024-07-14 15:09:28.145806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 
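Each repeating trio of records in this stream is one injected failure: tcp.c reports a data digest (CRC-32C) error on the qpair, nvme_qpair.c prints the WRITE it belongs to, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness only checks that these completions were counted; it reads the total (112 for this 2-second pass) back out of bdev_get_iostat just after the latency summary below. A minimal sketch of that read-back, assuming nothing beyond the bperf.sock path used throughout this trace:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'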
00:36:48.878 [2024-07-14 15:09:28.146121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.146160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.878 [2024-07-14 15:09:28.163691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.878 [2024-07-14 15:09:28.163991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.164028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.878 [2024-07-14 15:09:28.181582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.878 [2024-07-14 15:09:28.181855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.878 [2024-07-14 15:09:28.181911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.136 [2024-07-14 15:09:28.199885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:49.136 [2024-07-14 15:09:28.200167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.136 [2024-07-14 15:09:28.200214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.136 [2024-07-14 15:09:28.218100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:49.136 [2024-07-14 15:09:28.218430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.136 [2024-07-14 15:09:28.218467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.136 [2024-07-14 15:09:28.236439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:49.136 [2024-07-14 15:09:28.236765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.136 [2024-07-14 15:09:28.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.136 [2024-07-14 15:09:28.255061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:49.136 [2024-07-14 15:09:28.255390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.136 [2024-07-14 15:09:28.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.136 [2024-07-14 15:09:28.273322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195fda78
00:36:49.136 [2024-07-14 15:09:28.273695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:49.136 [2024-07-14 15:09:28.273777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:49.136 [2024-07-14 15:09:28.291605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78
00:36:49.136 [2024-07-14 15:09:28.291895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:49.136 [2024-07-14 15:09:28.291933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:49.136 [2024-07-14 15:09:28.309677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78
00:36:49.136 [2024-07-14 15:09:28.309965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:49.136 [2024-07-14 15:09:28.310003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:49.136 [2024-07-14 15:09:28.327573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78
00:36:49.136 [2024-07-14 15:09:28.327840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:49.136 [2024-07-14 15:09:28.327902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:49.136
00:36:49.136 Latency(us)
00:36:49.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.136 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:49.136 nvme0n1 : 2.01 14252.55 55.67 0.00 0.00 8952.40 7815.77 18544.26
00:36:49.136 ===================================================================================================================
00:36:49.136 Total : 14252.55 55.67 0.00 0.00 8952.40 7815.77 18544.26
00:36:49.136 0
00:36:49.136 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:49.136 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:49.136 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:49.136 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:49.136 | .driver_specific
00:36:49.136 | .nvme_error
00:36:49.137 | .status_code
00:36:49.137 | .command_transient_transport_error'
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 ))
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2056895
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2056895 ']'
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2056895
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2056895
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2056895'
00:36:49.394 killing process with pid 2056895
00:36:49.394 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2056895
00:36:49.395 Received shutdown signal, test time was about 2.000000 seconds
00:36:49.395
00:36:49.395 Latency(us)
00:36:49.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.395 ===================================================================================================================
00:36:49.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:49.395 15:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2056895
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2057563
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2057563 /var/tmp/bperf.sock
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2057563 ']'
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:50.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:50.771 15:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:50.771 [2024-07-14 15:09:29.795956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:50.771 [2024-07-14 15:09:29.796098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057563 ]
00:36:50.771 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:50.771 Zero copy mechanism will not be used.
00:36:50.771 EAL: No free 2048 kB hugepages reported on node 1
00:36:50.771 [2024-07-14 15:09:29.926015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:51.029 [2024-07-14 15:09:30.208474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:51.594 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:51.594 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:51.594 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:51.594 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:51.851 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:51.851 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:51.851 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:51.852 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:51.852 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:51.852 15:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.109 nvme0n1
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:52.109 15:09:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:52.367 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:52.367 Zero copy mechanism will not be used.
00:36:52.367 Running I/O for 2 seconds...
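The trace above is the whole mechanism of this test case: bdev_nvme_set_options --nvme-error-stat makes the NVMe bdev keep per-status-code error counters, bdev_nvme_attach_controller --ddgst enables the NVMe/TCP data digest on the connection, accel_error_inject_error -o crc32c -t corrupt -i 32 corrupts every 32nd crc32c operation so the computed digests stop matching, and get_transient_errcount later reads the command_transient_transport_error counter back out of bdev_get_iostat (the rpc.py + jq pipeline near the start of this excerpt). As a rough illustration only, the same counter could be read with a few lines of Python speaking JSON-RPC directly to the bperf socket; the helper below is hypothetical and not part of the test suite, but the socket path, bdev name, and JSON field path are taken verbatim from the trace.

# Sketch only: reads the counter that host/digest.sh extracts with
#   rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r
#   '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# Assumes a running SPDK JSON-RPC server (bdevperf started with -r /var/tmp/bperf.sock)
# and a bdev named nvme0n1; the helper name itself is made up for this example.
import json
import socket


def get_transient_errcount(bdev: str = "nvme0n1",
                           rpc_sock: str = "/var/tmp/bperf.sock") -> int:
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_get_iostat",   # same RPC the shell trace issues via rpc.py
        "params": {"name": bdev},
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(rpc_sock)
        sock.sendall(json.dumps(request).encode())
        # Keep reading until the reply parses as one complete JSON document
        # (a simplified version of what scripts/rpc.py's client does).
        buf = b""
        response = None
        while response is None:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                response, _ = json.JSONDecoder().raw_decode(buf.decode())
            except ValueError:
                continue
    # Same field path as the jq filter in the trace above.
    bdev_stats = response["result"]["bdevs"][0]
    nvme_error = bdev_stats["driver_specific"]["nvme_error"]
    return nvme_error["status_code"]["command_transient_transport_error"]


if __name__ == "__main__":
    count = get_transient_errcount()
    print(count, "command_transient_transport_error completions")

Note that digest.sh@71 only asserts the counter is non-zero after the run ((( 112 > 0 )) in the trace above), so any number of injected digest failures greater than zero passes the check.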
00:36:52.367 [2024-07-14 15:09:31.512175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.519995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.520414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.520460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.527696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.528115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.528160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.535150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.535552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.535596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.542406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.542838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.542891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.549799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.550240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.550284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.557369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.557797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.557842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.564657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.565064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.565107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.572279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.572706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.572750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.579889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.367 [2024-07-14 15:09:31.580318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.367 [2024-07-14 15:09:31.580363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.367 [2024-07-14 15:09:31.587139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.587564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.587607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.594826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.595226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.595271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.602703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.603112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.603160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.610493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.610941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 
15:09:31.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.618634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.619061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.619106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.627522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.627968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.628012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.635656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.636076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.636119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.642850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.643282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.643325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.649937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.650377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.650421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.657131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.657529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.657580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.665374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.665817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.665861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.368 [2024-07-14 15:09:31.673229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.368 [2024-07-14 15:09:31.673639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.368 [2024-07-14 15:09:31.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.680900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.681319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.681362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.688232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.688660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.688703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.695499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.695951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.695995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.702770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.703192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.703236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.709870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.710315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.710358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.717850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.718305] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.718348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.726177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.726581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.726625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.733574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.734036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.734080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.740776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.741224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.741267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.748206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.748606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.748650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.755324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.755731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.755775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.762433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.762642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.762684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.771197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.627 [2024-07-14 15:09:31.771634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.627 [2024-07-14 15:09:31.771679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.627 [2024-07-14 15:09:31.778482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.778888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.778931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.785546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.785984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.786028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.792853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.793262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.793305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.800267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.800663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.800706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.807253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.807694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.807737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.814307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.814734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.814776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 
15:09:31.821483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.821895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.821938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.828581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.829015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.829058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.835702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.836124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.836168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.842772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.843186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.849759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.850195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.850238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.858215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.858645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.858688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.866019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.866452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.866495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.873896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.874329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.874372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.881168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.881564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.881607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.888282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.888688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.888731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.895375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.895783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.895825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.902533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.902973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.903016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.910153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.910575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.910618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.917686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.918119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.918162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.925413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.925899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.628 [2024-07-14 15:09:31.933336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.628 [2024-07-14 15:09:31.933735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.628 [2024-07-14 15:09:31.933788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.940703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.940911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.940953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.948771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.949179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.949223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.956150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.956580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.956622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.963612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.964034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.964077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.970728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.971168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.977905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.978334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.978376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.985769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.986199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.986242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:31.994129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:31.994539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:31.994582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.001647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.002091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.002134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.009223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.009666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.009708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.016499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.016935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.016979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.023627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.024045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.024089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.030789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.031193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.031237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.037872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.038310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.038354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.045036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.045465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.045507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.052232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.052639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.052683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.059303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.059707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.066386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.066771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.066814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.889 [2024-07-14 15:09:32.073387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:52.889 [2024-07-14 15:09:32.073818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.889 [2024-07-14 15:09:32.073860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.080757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.081173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.081215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.088976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.089386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.096224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.096625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.096667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.103294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.103696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.103738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.110497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.110954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.110998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.118000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.118398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.118441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.125272] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.125667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.125710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.132952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.133350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.133392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.140242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.140634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.140676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.147648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.148088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.148131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.155297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.155724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.155766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.162904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.163301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.163344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.170588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.171020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.171063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.178213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.178637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.185317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.185740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.185793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.890 [2024-07-14 15:09:32.192792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.890 [2024-07-14 15:09:32.193223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.890 [2024-07-14 15:09:32.193266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.200491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.200927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.200970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.208280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.208707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.208750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.216036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.216457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.216499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.223431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.223830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.223872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.230838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.231265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.231308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.238484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.238864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.238916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.246144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.246570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.246612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.253666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.254090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.254134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.261424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.261823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.261865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.268539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.268978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.269020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.276401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.276799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.276844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.283859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.284270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.284313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.291555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.291964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.292007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.150 [2024-07-14 15:09:32.299169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.150 [2024-07-14 15:09:32.299565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.150 [2024-07-14 15:09:32.299607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.306434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.306831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.306873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.313887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.314281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.314334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.321358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.321786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.321828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.328860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.329280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.329322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.336543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.336989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.337055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.344394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.344829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.344873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.352258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.352704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.352747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.359394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.359805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.359848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.367071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.367504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.367547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.374491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.374895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.374939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.381914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.382332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.382377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.389647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.390080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.390124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.397311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.397744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.397788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.404805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.405208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.412500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.412956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.413000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.419910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.420304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.420347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.427497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.427924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.427967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.435207] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.435632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.435675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.442665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.443132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.449724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.151 [2024-07-14 15:09:32.450163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.151 [2024-07-14 15:09:32.450207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.151 [2024-07-14 15:09:32.456954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.457334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.457378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.464050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.464485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.464528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.471158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.471590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.471633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.478520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.478966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.479010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.486811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.487225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.487269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.494513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.494934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.494978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.501762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.502218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.508962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.509381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.509424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.516719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.517134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.517175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.524074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.524494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.524539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.531285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.531713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.531757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.539564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.540024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.540069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.547680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.548091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.548136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.555298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.555736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.555780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.563311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.563750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.563794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.571107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.571543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.571586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.578529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.578938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.578990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.585906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.586303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.586346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.593220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.593629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.593673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.600350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.600780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.600824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.607630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.608075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.608120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.615160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.615576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.615620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.622518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.622964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.623008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.629734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.411 [2024-07-14 15:09:32.630171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.411 [2024-07-14 15:09:32.630216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.411 [2024-07-14 15:09:32.637331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.637772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.637816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.646076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.646531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.646575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.653739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.654179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.654223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.661403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.661841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.661892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.668710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.669120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.669163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.675937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.676339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.676383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.683251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.683680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.683724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.690619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.691048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.691092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.697975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.698405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.698448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.705659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.706118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.706162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.412 [2024-07-14 15:09:32.714297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.412 [2024-07-14 15:09:32.714731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.412 [2024-07-14 15:09:32.714775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.672 [2024-07-14 15:09:32.722037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.672 [2024-07-14 15:09:32.722470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.672 [2024-07-14 15:09:32.722522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.672 [2024-07-14 15:09:32.730034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.672 [2024-07-14 15:09:32.730471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.672 [2024-07-14 15:09:32.730515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.672 [2024-07-14 15:09:32.737663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.672 [2024-07-14 15:09:32.738112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.672 [2024-07-14 15:09:32.738156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.672 [2024-07-14 15:09:32.745224] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.672 [2024-07-14 15:09:32.745528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.745572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.752503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.752915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.752983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.760057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.760489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.760533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.767631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.768131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.775442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.775872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.775934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.783245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.783676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.783721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.790631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.791054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.791098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.798543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.798982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.799026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.805944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.806385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.806428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.813477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.813931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.813975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.820859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.821283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.821326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.828301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.828740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.828784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.835916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.836296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.836340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.843344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.843773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.843815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.850961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.851394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.851437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.858585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.859041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.859085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.866470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.866914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.866968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.873942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.874358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.874401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.881664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.882124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.882168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.889506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.889936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.889980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.897149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.897579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.897639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.904821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.905311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.912633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.913076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.913120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.920462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.920863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.920918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.928197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.928600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.928643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.935426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.935857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.935910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.943335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.673 [2024-07-14 15:09:32.943788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-07-14 15:09:32.943831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-07-14 15:09:32.951052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.674 [2024-07-14 15:09:32.951464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-07-14 15:09:32.951508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.674 [2024-07-14 15:09:32.958906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.674 [2024-07-14 15:09:32.959309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-07-14 15:09:32.959353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.674 [2024-07-14 15:09:32.966286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.674 [2024-07-14 15:09:32.966721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-07-14 15:09:32.966764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.674 [2024-07-14 15:09:32.973502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.674 [2024-07-14 15:09:32.973944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-07-14 15:09:32.973987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:32.980747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:32.981158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:32.981202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:32.988093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:32.988502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:32.988546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:32.995603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:32.996014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:32.996058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:33.003350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:33.003775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:33.003818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:33.011094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:33.011531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:33.011574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:33.018307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:33.018721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:33.018765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:33.025419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.934 [2024-07-14 15:09:33.025835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.934 [2024-07-14 15:09:33.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.934 [2024-07-14 15:09:33.032745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.033154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.033198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.040063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.040468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.040511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.047330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.047760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.047802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.054740] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.055152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.063332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.063787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.071473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.071899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.071942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.079807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.080231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.080274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.087919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.088357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.095965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.096383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.096427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.104660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.105101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.105145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.111986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.112422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.112465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.119184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.119623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.119666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.126458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.126865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.126917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.134595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.135042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.135085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.142942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.143366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.143411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.150242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.150670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.150713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.157330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.157730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.157784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.165336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.165768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.165811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.172557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.172993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.173037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.180904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.181353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.181397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.188947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.189375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.189418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.197836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.198258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.198302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.205327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.205740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.205783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.212600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.213006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.213049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.220652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.221063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.221107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.229407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.229800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.229844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.935 [2024-07-14 15:09:33.238249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.935 [2024-07-14 15:09:33.238681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.935 [2024-07-14 15:09:33.238725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.246967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.247371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.247414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.254502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.254944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.254988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.262121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.262527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.262571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.269378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.269816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.269859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.276619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.277016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.277059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.283750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.284160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.284205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.291163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.291596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.291641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.298789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.299199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.299243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.306017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.306421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.306465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.313834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.314277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.314320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.322036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.322438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.322481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.329178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.329610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.195 [2024-07-14 15:09:33.329653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.195 [2024-07-14 15:09:33.336334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.195 [2024-07-14 15:09:33.336754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.336798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.343964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.344399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.344441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.351273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.351677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.351721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.358365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.358775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.358818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.365539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.365984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.366027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.372835] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.373285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.373329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.380130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.380561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.380604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.387585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.387973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.388017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.394999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.395396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.395439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.402579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.402998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.403042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.410487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.410919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.410963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.417825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.418234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.418277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.425656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.426069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.426114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.433373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.433804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.433847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.441148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.441545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.441588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.448610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.449025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.449069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.456104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.456518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.456562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.464246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.464680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.464723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.472239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.472645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.472691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.480646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.481087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.481159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.488785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.489190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.489234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.196 [2024-07-14 15:09:33.495980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.196 [2024-07-14 15:09:33.496181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.196 [2024-07-14 15:09:33.496224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.456 [2024-07-14 15:09:33.503634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.456 [2024-07-14 15:09:33.504061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.456 [2024-07-14 15:09:33.504105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.456 00:36:54.456 Latency(us) 00:36:54.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.456 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:54.456 nvme0n1 : 2.00 4090.78 511.35 0.00 0.00 3899.87 3301.07 10679.94 00:36:54.456 =================================================================================================================== 00:36:54.456 Total : 4090.78 511.35 0.00 0.00 3899.87 3301.07 10679.94 00:36:54.456 0 00:36:54.456 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:54.456 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:54.456 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:54.456 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:54.456 | .driver_specific 00:36:54.456 | .nvme_error 00:36:54.456 | .status_code 00:36:54.456 | .command_transient_transport_error' 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 264 > 0 )) 00:36:54.716 15:09:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2057563 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2057563 ']' 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2057563 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2057563 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2057563' 00:36:54.716 killing process with pid 2057563 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2057563 00:36:54.716 Received shutdown signal, test time was about 2.000000 seconds 00:36:54.716 00:36:54.716 Latency(us) 00:36:54.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.716 =================================================================================================================== 00:36:54.716 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.716 15:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2057563 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2055536 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2055536 ']' 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2055536 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2055536 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2055536' 00:36:55.654 killing process with pid 2055536 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2055536 00:36:55.654 15:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2055536 00:36:57.028 00:36:57.028 real 0m23.666s 00:36:57.028 user 0m45.630s 00:36:57.028 sys 0m4.739s 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:57.028 ************************************ 00:36:57.028 END TEST nvmf_digest_error 00:36:57.028 ************************************ 00:36:57.028 15:09:36 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:57.028 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:57.028 rmmod nvme_tcp 00:36:57.028 rmmod nvme_fabrics 00:36:57.288 rmmod nvme_keyring 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2055536 ']' 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2055536 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2055536 ']' 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2055536 00:36:57.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2055536) - No such process 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2055536 is not found' 00:36:57.288 Process with pid 2055536 is not found 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:57.288 15:09:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.194 15:09:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:59.194 00:36:59.194 real 0m53.098s 00:36:59.194 user 1m34.069s 00:36:59.194 sys 0m11.277s 00:36:59.194 15:09:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:59.194 15:09:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.194 ************************************ 00:36:59.194 END TEST nvmf_digest 00:36:59.194 ************************************ 00:36:59.194 15:09:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:59.194 15:09:38 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:36:59.194 15:09:38 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:36:59.194 15:09:38 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:36:59.194 15:09:38 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 
00:36:59.194 15:09:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:59.194 15:09:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:59.194 15:09:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:59.194 ************************************ 00:36:59.194 START TEST nvmf_bdevperf 00:36:59.194 ************************************ 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:59.194 * Looking for test storage... 00:36:59.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.194 15:09:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:59.195 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:59.453 15:09:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.386 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:01.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:01.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:01.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:01.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:01.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:01.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:37:01.387 00:37:01.387 --- 10.0.0.2 ping statistics --- 00:37:01.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.387 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:01.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:37:01.387 00:37:01.387 --- 10.0.0.1 ping statistics --- 00:37:01.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.387 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2060173 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2060173 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2060173 ']' 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.387 15:09:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.387 [2024-07-14 15:09:40.608031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:01.387 [2024-07-14 15:09:40.608177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.387 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.646 [2024-07-14 15:09:40.751035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:01.904 [2024-07-14 15:09:41.007080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.904 [2024-07-14 15:09:41.007153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.904 [2024-07-14 15:09:41.007195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.904 [2024-07-14 15:09:41.007216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.904 [2024-07-14 15:09:41.007236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.904 [2024-07-14 15:09:41.007386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:01.904 [2024-07-14 15:09:41.007470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.904 [2024-07-14 15:09:41.007478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 [2024-07-14 15:09:41.599978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 Malloc0 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.468 [2024-07-14 15:09:41.716125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:02.468 { 00:37:02.468 "params": { 00:37:02.468 "name": "Nvme$subsystem", 00:37:02.468 "trtype": "$TEST_TRANSPORT", 00:37:02.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.468 "adrfam": "ipv4", 00:37:02.468 "trsvcid": "$NVMF_PORT", 00:37:02.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.468 "hdgst": ${hdgst:-false}, 00:37:02.468 "ddgst": ${ddgst:-false} 00:37:02.468 }, 00:37:02.468 "method": "bdev_nvme_attach_controller" 00:37:02.468 } 00:37:02.468 EOF 00:37:02.468 )") 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:02.468 15:09:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:02.468 "params": { 00:37:02.468 "name": "Nvme1", 00:37:02.468 "trtype": "tcp", 00:37:02.468 "traddr": "10.0.0.2", 00:37:02.468 "adrfam": "ipv4", 00:37:02.468 "trsvcid": "4420", 00:37:02.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.468 "hdgst": false, 00:37:02.468 "ddgst": false 00:37:02.468 }, 00:37:02.468 "method": "bdev_nvme_attach_controller" 00:37:02.468 }' 00:37:02.727 [2024-07-14 15:09:41.798602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:02.727 [2024-07-14 15:09:41.798743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060333 ] 00:37:02.727 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.727 [2024-07-14 15:09:41.930908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.985 [2024-07-14 15:09:42.165681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.551 Running I/O for 1 seconds... 00:37:04.494 00:37:04.494 Latency(us) 00:37:04.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:04.494 Verification LBA range: start 0x0 length 0x4000 00:37:04.494 Nvme1n1 : 1.02 6241.83 24.38 0.00 0.00 20412.96 4563.25 16796.63 00:37:04.494 =================================================================================================================== 00:37:04.494 Total : 6241.83 24.38 0.00 0.00 20412.96 4563.25 16796.63 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2060729 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.432 { 00:37:05.432 "params": { 00:37:05.432 "name": "Nvme$subsystem", 00:37:05.432 "trtype": "$TEST_TRANSPORT", 00:37:05.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.432 "adrfam": "ipv4", 00:37:05.432 "trsvcid": "$NVMF_PORT", 00:37:05.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.432 "hdgst": ${hdgst:-false}, 00:37:05.432 "ddgst": ${ddgst:-false} 00:37:05.432 }, 00:37:05.432 "method": "bdev_nvme_attach_controller" 00:37:05.432 } 00:37:05.432 EOF 00:37:05.432 )") 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:05.432 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:05.690 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:05.690 15:09:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:05.690 "params": { 00:37:05.690 "name": "Nvme1", 00:37:05.690 "trtype": "tcp", 00:37:05.690 "traddr": "10.0.0.2", 00:37:05.690 "adrfam": "ipv4", 00:37:05.690 "trsvcid": "4420", 00:37:05.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.690 "hdgst": false, 00:37:05.690 "ddgst": false 00:37:05.690 }, 00:37:05.690 "method": "bdev_nvme_attach_controller" 00:37:05.690 }' 00:37:05.690 [2024-07-14 15:09:44.813002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:05.690 [2024-07-14 15:09:44.813170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060729 ] 00:37:05.690 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.690 [2024-07-14 15:09:44.941750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.948 [2024-07-14 15:09:45.174098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.516 Running I/O for 15 seconds... 00:37:09.054 15:09:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2060173 00:37:09.054 15:09:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:09.054 [2024-07-14 15:09:47.755520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.755600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.755668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.755694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.755736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.755759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.755798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.755819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.755841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.756051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.756086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.756108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.756132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.756161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.756203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.054 [2024-07-14 15:09:47.756227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.054 [2024-07-14 15:09:47.756253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:09.055 [2024-07-14 15:09:47.756783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.756964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.756986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.757971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.757994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.758038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.758059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.758082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.055 [2024-07-14 15:09:47.758104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.055 [2024-07-14 15:09:47.758127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.758988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.759031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.759075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.056 [2024-07-14 15:09:47.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 
[2024-07-14 15:09:47.759936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.056 [2024-07-14 15:09:47.759959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.056 [2024-07-14 15:09:47.759980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.760979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.760999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761439] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.057 [2024-07-14 15:09:47.761537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.057 [2024-07-14 15:09:47.761562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.057 [2024-07-14 15:09:47.761586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.761965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.761986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:09.058 [2024-07-14 15:09:47.762006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.058 [2024-07-14 15:09:47.762360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:37:09.058 [2024-07-14 15:09:47.762413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:09.058 [2024-07-14 15:09:47.762432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:09.058 [2024-07-14 15:09:47.762453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108016 len:8 PRP1 0x0 PRP2 0x0 00:37:09.058 [2024-07-14 15:09:47.762475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762787] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:37:09.058 [2024-07-14 15:09:47.762924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.058 [2024-07-14 15:09:47.762958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.762983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.058 [2024-07-14 15:09:47.763003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.763024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.058 [2024-07-14 15:09:47.763043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.763064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:09.058 [2024-07-14 15:09:47.763084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:09.058 [2024-07-14 15:09:47.763102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.058 [2024-07-14 15:09:47.767357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.058 [2024-07-14 15:09:47.767425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.058 [2024-07-14 15:09:47.768195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.058 [2024-07-14 15:09:47.768248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.058 [2024-07-14 15:09:47.768273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.058 [2024-07-14 15:09:47.768571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.058 [2024-07-14 15:09:47.768870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.058 [2024-07-14 15:09:47.768936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.058 [2024-07-14 15:09:47.768959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.058 [2024-07-14 15:09:47.773027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
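[editor's note] The long dump above and the retry loop that follows are the expected fallout of host/bdevperf.sh@33: the nvmf target (pid 2060173) is killed with SIGKILL while the 15-second verify run still has its full queue depth of 128 commands in flight, so the TCP qpair is torn down, each outstanding command is completed with ABORTED - SQ DELETION, and every subsequent reset attempt from bdev_nvme fails with connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A rough sketch of the driving sequence, reconstructed from the @29/@30/@32/@33/@35 script markers in this log (the $rootdir/$nvmfpid variable names and anything after the second sleep are assumptions, not shown in this excerpt):

    # Sketch of the failover sequence being exercised around this point in
    # host/bdevperf.sh; the eventual target restart that would let these
    # resets succeed is outside this excerpt.
    "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!          # 2060729 in this run
    sleep 3                 # let the verify workload ramp up
    kill -9 "$nvmfpid"      # 2060173: the nvmf/tcp target disappears mid-I/O
    sleep 3                 # bdev_nvme keeps resetting; every attempt logs errno 111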
00:37:09.058 [2024-07-14 15:09:47.782235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.058 [2024-07-14 15:09:47.782728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.058 [2024-07-14 15:09:47.782785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.058 [2024-07-14 15:09:47.782812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.058 [2024-07-14 15:09:47.783113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.058 [2024-07-14 15:09:47.783407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.058 [2024-07-14 15:09:47.783438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.058 [2024-07-14 15:09:47.783460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.058 [2024-07-14 15:09:47.787582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.058 [2024-07-14 15:09:47.796791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.058 [2024-07-14 15:09:47.797295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.058 [2024-07-14 15:09:47.797337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.058 [2024-07-14 15:09:47.797363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.058 [2024-07-14 15:09:47.797646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.058 [2024-07-14 15:09:47.797950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.058 [2024-07-14 15:09:47.797982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.058 [2024-07-14 15:09:47.798003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.058 [2024-07-14 15:09:47.802147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.059 [2024-07-14 15:09:47.811342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.811796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.811838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.811864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.812159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.812447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.812478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.812500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.816599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.059 [2024-07-14 15:09:47.825721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.826207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.826258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.826282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.826587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.826889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.826919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.826941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.831012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.059 [2024-07-14 15:09:47.840092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.840572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.840613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.840639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.840932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.841215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.841246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.841268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.845339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.059 [2024-07-14 15:09:47.854690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.855154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.855195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.855221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.855503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.855787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.855818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.855839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.859930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.059 [2024-07-14 15:09:47.869263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.869715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.869755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.869780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.870072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.870356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.870387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.870408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.874496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.059 [2024-07-14 15:09:47.883812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.884265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.884306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.884331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.884610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.884905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.059 [2024-07-14 15:09:47.884937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.059 [2024-07-14 15:09:47.884959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.059 [2024-07-14 15:09:47.889019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.059 [2024-07-14 15:09:47.898336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.059 [2024-07-14 15:09:47.898759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.059 [2024-07-14 15:09:47.898800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.059 [2024-07-14 15:09:47.898824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.059 [2024-07-14 15:09:47.899115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.059 [2024-07-14 15:09:47.899399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.899429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.899452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.903528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.060 [2024-07-14 15:09:47.912855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.913302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.913343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.913369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.913650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.913967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.913998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.914020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.918108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.060 [2024-07-14 15:09:47.927228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.927685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.927731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.927756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.928056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.928340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.928371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.928392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.932477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.060 [2024-07-14 15:09:47.941592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.942063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.942105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.942130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.942410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.942694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.942726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.942747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.946828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.060 [2024-07-14 15:09:47.956158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.956630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.956670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.956696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.956988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.957273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.957304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.957326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.961381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.060 [2024-07-14 15:09:47.970681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.971141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.971182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.971208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.971488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.971776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.971807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.971829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.975897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.060 [2024-07-14 15:09:47.985201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:47.985631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:47.985680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:47.985714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:47.986040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:47.986323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:47.986354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:47.986375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:47.990429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.060 [2024-07-14 15:09:47.999735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.060 [2024-07-14 15:09:48.000196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.060 [2024-07-14 15:09:48.000237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.060 [2024-07-14 15:09:48.000263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.060 [2024-07-14 15:09:48.000543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.060 [2024-07-14 15:09:48.000826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.060 [2024-07-14 15:09:48.000858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.060 [2024-07-14 15:09:48.000890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.060 [2024-07-14 15:09:48.004948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.060 [2024-07-14 15:09:48.014236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.014663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.014703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.014727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.015020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.015303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.015335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.015362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.019416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.061 [2024-07-14 15:09:48.028709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.029155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.029195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.029220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.029499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.029782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.029812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.029834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.033893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.061 [2024-07-14 15:09:48.043193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.043660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.043701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.043727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.044020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.044303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.044334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.044355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.048411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.061 [2024-07-14 15:09:48.057599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.058042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.058083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.058109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.058388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.058672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.058703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.058724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.062784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.061 [2024-07-14 15:09:48.072079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.072550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.072597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.072623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.072917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.073202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.073234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.073256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.077309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.061 [2024-07-14 15:09:48.086604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.087049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.087089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.087114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.087394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.087676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.087707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.087729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.091781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.061 [2024-07-14 15:09:48.101106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.101567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.101607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.101632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.101925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.102209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.102240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.102261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.106327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.061 [2024-07-14 15:09:48.115631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.061 [2024-07-14 15:09:48.116083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.061 [2024-07-14 15:09:48.116124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.061 [2024-07-14 15:09:48.116149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.061 [2024-07-14 15:09:48.116437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.061 [2024-07-14 15:09:48.116721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.061 [2024-07-14 15:09:48.116752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.061 [2024-07-14 15:09:48.116773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.061 [2024-07-14 15:09:48.120831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.061 [2024-07-14 15:09:48.130149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.130607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.130648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.130673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.130965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.131249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.131279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.131301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.135348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.062 [2024-07-14 15:09:48.144634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.145216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.145277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.145302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.145581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.145864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.145906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.145929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.149971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.062 [2024-07-14 15:09:48.159052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.159500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.159540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.159564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.159844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.160138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.160170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.160198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.164257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.062 [2024-07-14 15:09:48.173553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.174011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.174052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.174077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.174357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.174640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.174671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.174692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.178749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.062 [2024-07-14 15:09:48.188052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.188581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.188621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.188647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.188941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.189238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.189269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.189291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.193340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.062 [2024-07-14 15:09:48.202408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.202936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.202977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.203002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.203283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.203566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.203597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.203618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.207671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.062 [2024-07-14 15:09:48.216970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.217424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.217465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.217490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.217771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.218067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.218099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.218121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.222173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.062 [2024-07-14 15:09:48.231468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.231917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.231958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.231983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.062 [2024-07-14 15:09:48.232263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.062 [2024-07-14 15:09:48.232546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.062 [2024-07-14 15:09:48.232577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.062 [2024-07-14 15:09:48.232598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.062 [2024-07-14 15:09:48.236659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.062 [2024-07-14 15:09:48.245969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.062 [2024-07-14 15:09:48.246420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.062 [2024-07-14 15:09:48.246460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.062 [2024-07-14 15:09:48.246485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.246765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.247064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.247096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.247117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.251180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.063 [2024-07-14 15:09:48.260488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.260963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.261005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.261030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.261316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.261599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.261631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.261652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.265709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.063 [2024-07-14 15:09:48.275014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.275460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.275500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.275525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.275805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.276100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.276131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.276153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.280205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.063 [2024-07-14 15:09:48.289498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.289904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.289946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.289971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.290252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.290535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.290566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.290587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.294643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.063 [2024-07-14 15:09:48.303967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.304435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.304476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.304500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.304781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.305077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.305109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.305136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.309184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.063 [2024-07-14 15:09:48.318476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.318921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.318962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.318987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.319268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.319550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.319581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.319603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.323654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.063 [2024-07-14 15:09:48.332944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.333393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.333432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.333457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.333737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.334032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.334064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.334086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.338139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.063 [2024-07-14 15:09:48.347432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.063 [2024-07-14 15:09:48.347902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.063 [2024-07-14 15:09:48.347943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.063 [2024-07-14 15:09:48.347967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.063 [2024-07-14 15:09:48.348247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.063 [2024-07-14 15:09:48.348531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.063 [2024-07-14 15:09:48.348561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.063 [2024-07-14 15:09:48.348583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.063 [2024-07-14 15:09:48.352665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.323 [2024-07-14 15:09:48.361995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.323 [2024-07-14 15:09:48.362539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.323 [2024-07-14 15:09:48.362599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.323 [2024-07-14 15:09:48.362624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.323 [2024-07-14 15:09:48.362919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.363202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.363233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.363255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.367302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.324 [2024-07-14 15:09:48.376370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.376841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.376891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.376917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.377198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.377483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.377514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.377535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.381593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.324 [2024-07-14 15:09:48.390895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.391346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.391387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.391412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.391691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.391988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.392020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.392056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.396106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.324 [2024-07-14 15:09:48.405408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.405860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.405909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.405935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.406221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.406504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.406535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.406557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.410618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.324 [2024-07-14 15:09:48.419920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.420362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.420402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.420427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.420707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.421004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.421036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.421058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.425109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.324 [2024-07-14 15:09:48.434426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.434889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.434930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.434955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.435236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.435519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.435550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.435571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.439627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.324 [2024-07-14 15:09:48.448944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.449376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.449416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.449441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.449721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.450018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.450050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.450079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.454137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.324 [2024-07-14 15:09:48.463430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.463858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.463906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.463932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.464212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.324 [2024-07-14 15:09:48.464494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.324 [2024-07-14 15:09:48.464525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.324 [2024-07-14 15:09:48.464547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.324 [2024-07-14 15:09:48.468591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.324 [2024-07-14 15:09:48.477896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.324 [2024-07-14 15:09:48.478321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.324 [2024-07-14 15:09:48.478361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.324 [2024-07-14 15:09:48.478385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.324 [2024-07-14 15:09:48.478665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.478962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.478994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.479015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.483090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.325 [2024-07-14 15:09:48.492376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.492848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.492896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.492923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.493203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.493486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.493517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.493538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.497588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.325 [2024-07-14 15:09:48.506912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.507340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.507381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.507407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.507688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.507988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.508020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.508041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.512084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.325 [2024-07-14 15:09:48.521365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.521786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.521826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.521850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.522140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.522423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.522455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.522477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.526530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.325 [2024-07-14 15:09:48.535818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.536289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.536330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.536354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.536634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.536928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.536960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.536982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.541045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.325 [2024-07-14 15:09:48.550355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.550779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.550826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.550859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.551157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.551442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.551473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.551494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.555552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.325 [2024-07-14 15:09:48.564868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.565333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.565374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.565399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.565679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.565976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.566007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.566029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.570078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.325 [2024-07-14 15:09:48.579374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.579808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.579849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.579874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.580166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.580449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.580480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.580501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.584552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.325 [2024-07-14 15:09:48.593828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.594278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.325 [2024-07-14 15:09:48.594318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.325 [2024-07-14 15:09:48.594344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.325 [2024-07-14 15:09:48.594624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.325 [2024-07-14 15:09:48.594919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.325 [2024-07-14 15:09:48.594951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.325 [2024-07-14 15:09:48.594978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.325 [2024-07-14 15:09:48.599030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.325 [2024-07-14 15:09:48.608381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.325 [2024-07-14 15:09:48.608910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.326 [2024-07-14 15:09:48.608973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.326 [2024-07-14 15:09:48.608998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.326 [2024-07-14 15:09:48.609278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.326 [2024-07-14 15:09:48.609562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.326 [2024-07-14 15:09:48.609593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.326 [2024-07-14 15:09:48.609614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.326 [2024-07-14 15:09:48.613670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.326 [2024-07-14 15:09:48.622747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.326 [2024-07-14 15:09:48.623211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.326 [2024-07-14 15:09:48.623252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.326 [2024-07-14 15:09:48.623277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.326 [2024-07-14 15:09:48.623557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.326 [2024-07-14 15:09:48.623840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.326 [2024-07-14 15:09:48.623871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.326 [2024-07-14 15:09:48.623910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.326 [2024-07-14 15:09:48.627986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.587 [2024-07-14 15:09:48.637294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.587 [2024-07-14 15:09:48.637751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.587 [2024-07-14 15:09:48.637791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.587 [2024-07-14 15:09:48.637816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.587 [2024-07-14 15:09:48.638107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.587 [2024-07-14 15:09:48.638391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.587 [2024-07-14 15:09:48.638423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.587 [2024-07-14 15:09:48.638445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.587 [2024-07-14 15:09:48.642513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.587 [2024-07-14 15:09:48.651847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.587 [2024-07-14 15:09:48.652402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.587 [2024-07-14 15:09:48.652442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.587 [2024-07-14 15:09:48.652468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.587 [2024-07-14 15:09:48.652746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.587 [2024-07-14 15:09:48.653044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.587 [2024-07-14 15:09:48.653076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.587 [2024-07-14 15:09:48.653097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.587 [2024-07-14 15:09:48.657164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.587 [2024-07-14 15:09:48.666251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.587 [2024-07-14 15:09:48.666729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.587 [2024-07-14 15:09:48.666769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.587 [2024-07-14 15:09:48.666794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.587 [2024-07-14 15:09:48.667085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.587 [2024-07-14 15:09:48.667369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.587 [2024-07-14 15:09:48.667401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.587 [2024-07-14 15:09:48.667423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.587 [2024-07-14 15:09:48.671484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.587 [2024-07-14 15:09:48.680812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.587 [2024-07-14 15:09:48.681274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.587 [2024-07-14 15:09:48.681315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.681340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.681621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.681919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.681951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.681973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.686030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.588 [2024-07-14 15:09:48.695393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.695852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.695904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.695937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.696223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.696507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.696539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.696560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.700619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.588 [2024-07-14 15:09:48.709982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.710509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.710568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.710593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.710873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.711169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.711200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.711222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.715280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.588 [2024-07-14 15:09:48.724414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.724854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.724905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.724942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.725223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.725507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.725539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.725560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.729636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.588 [2024-07-14 15:09:48.738969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.739482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.739539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.739564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.739842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.740137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.740185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.740208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.744299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.588 [2024-07-14 15:09:48.753440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.753890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.753932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.753957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.754240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.754524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.754555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.754576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.758636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.588 [2024-07-14 15:09:48.767975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.768515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.768555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.768581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.768862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.769171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.769203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.769224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.773409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.588 [2024-07-14 15:09:48.782491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.588 [2024-07-14 15:09:48.782955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.588 [2024-07-14 15:09:48.782997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.588 [2024-07-14 15:09:48.783023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.588 [2024-07-14 15:09:48.783302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.588 [2024-07-14 15:09:48.783584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.588 [2024-07-14 15:09:48.783616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.588 [2024-07-14 15:09:48.783638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.588 [2024-07-14 15:09:48.787686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.588 [2024-07-14 15:09:48.797007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.797450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.797491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.797517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.797798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.798092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.798124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.798147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.802218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.589 [2024-07-14 15:09:48.811514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.811983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.812058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.812084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.812366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.812649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.812680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.812702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.816758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.589 [2024-07-14 15:09:48.826056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.826503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.826543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.826568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.826847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.827141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.827172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.827194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.831235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.589 [2024-07-14 15:09:48.840554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.841053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.841094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.841125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.841409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.841692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.841723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.841744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.845795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.589 [2024-07-14 15:09:48.855148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.855575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.855616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.855641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.855931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.856216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.856247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.856269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.860335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.589 [2024-07-14 15:09:48.869670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.870123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.870163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.870188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.870468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.870759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.870790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.870812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.874898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.589 [2024-07-14 15:09:48.884315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.589 [2024-07-14 15:09:48.884787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.589 [2024-07-14 15:09:48.884828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.589 [2024-07-14 15:09:48.884854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.589 [2024-07-14 15:09:48.885144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.589 [2024-07-14 15:09:48.885430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.589 [2024-07-14 15:09:48.885467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.589 [2024-07-14 15:09:48.885489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.589 [2024-07-14 15:09:48.889575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.851 [2024-07-14 15:09:48.898259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.898691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.898727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.898749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.851 [2024-07-14 15:09:48.899044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.851 [2024-07-14 15:09:48.899347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.851 [2024-07-14 15:09:48.899374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.851 [2024-07-14 15:09:48.899393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.851 [2024-07-14 15:09:48.903068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.851 [2024-07-14 15:09:48.912885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.913346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.913387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.913412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.851 [2024-07-14 15:09:48.913694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.851 [2024-07-14 15:09:48.913992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.851 [2024-07-14 15:09:48.914024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.851 [2024-07-14 15:09:48.914046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.851 [2024-07-14 15:09:48.918137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.851 [2024-07-14 15:09:48.927307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.927864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.927947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.927973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.851 [2024-07-14 15:09:48.928255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.851 [2024-07-14 15:09:48.928537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.851 [2024-07-14 15:09:48.928569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.851 [2024-07-14 15:09:48.928590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.851 [2024-07-14 15:09:48.932660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.851 [2024-07-14 15:09:48.941856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.942323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.942364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.942389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.851 [2024-07-14 15:09:48.942672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.851 [2024-07-14 15:09:48.942977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.851 [2024-07-14 15:09:48.943016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.851 [2024-07-14 15:09:48.943038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.851 [2024-07-14 15:09:48.947143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.851 [2024-07-14 15:09:48.956304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.956775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.956815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.956841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.851 [2024-07-14 15:09:48.957132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.851 [2024-07-14 15:09:48.957418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.851 [2024-07-14 15:09:48.957449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.851 [2024-07-14 15:09:48.957471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.851 [2024-07-14 15:09:48.961576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.851 [2024-07-14 15:09:48.970741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.851 [2024-07-14 15:09:48.971227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.851 [2024-07-14 15:09:48.971285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.851 [2024-07-14 15:09:48.971311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:48.971592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:48.971888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:48.971927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:48.971949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:48.976064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.852 [2024-07-14 15:09:48.985233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:48.985688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:48.985729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:48.985760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:48.986056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:48.986342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:48.986374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:48.986396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:48.990487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.852 [2024-07-14 15:09:48.999631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.000081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.000122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.000148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.000429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.000715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.000746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.000768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.004889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.852 [2024-07-14 15:09:49.014001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.014459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.014500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.014525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.014825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.015123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.015155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.015176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.019263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.852 [2024-07-14 15:09:49.028384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.028829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.028870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.028906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.029189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.029474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.029511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.029534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.033618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.852 [2024-07-14 15:09:49.042992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.043459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.043500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.043526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.043808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.044103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.044135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.044157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.048246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.852 [2024-07-14 15:09:49.057396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.057842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.057890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.057918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.058200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.058484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.058515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.058537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.062623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.852 [2024-07-14 15:09:49.072003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.072464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.072505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.072530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.072812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.852 [2024-07-14 15:09:49.073188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.852 [2024-07-14 15:09:49.073222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.852 [2024-07-14 15:09:49.073244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.852 [2024-07-14 15:09:49.077321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.852 [2024-07-14 15:09:49.086454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.852 [2024-07-14 15:09:49.086928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.852 [2024-07-14 15:09:49.086971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.852 [2024-07-14 15:09:49.086997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.852 [2024-07-14 15:09:49.087280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.853 [2024-07-14 15:09:49.087566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.853 [2024-07-14 15:09:49.087597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.853 [2024-07-14 15:09:49.087619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.853 [2024-07-14 15:09:49.091711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.853 [2024-07-14 15:09:49.100850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.853 [2024-07-14 15:09:49.101291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.853 [2024-07-14 15:09:49.101332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.853 [2024-07-14 15:09:49.101357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.853 [2024-07-14 15:09:49.101651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.853 [2024-07-14 15:09:49.101952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.853 [2024-07-14 15:09:49.101984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.853 [2024-07-14 15:09:49.102006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.853 [2024-07-14 15:09:49.106094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.853 [2024-07-14 15:09:49.115224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.853 [2024-07-14 15:09:49.115691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.853 [2024-07-14 15:09:49.115731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.853 [2024-07-14 15:09:49.115757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.853 [2024-07-14 15:09:49.116051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.853 [2024-07-14 15:09:49.116337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.853 [2024-07-14 15:09:49.116368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.853 [2024-07-14 15:09:49.116390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.853 [2024-07-14 15:09:49.120476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.853 [2024-07-14 15:09:49.129608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.853 [2024-07-14 15:09:49.130076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.853 [2024-07-14 15:09:49.130117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.853 [2024-07-14 15:09:49.130149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.853 [2024-07-14 15:09:49.130431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.853 [2024-07-14 15:09:49.130717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.853 [2024-07-14 15:09:49.130748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.853 [2024-07-14 15:09:49.130769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.853 [2024-07-14 15:09:49.134855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.853 [2024-07-14 15:09:49.144018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.853 [2024-07-14 15:09:49.144450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.853 [2024-07-14 15:09:49.144490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.853 [2024-07-14 15:09:49.144515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.853 [2024-07-14 15:09:49.144797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.853 [2024-07-14 15:09:49.145098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.853 [2024-07-14 15:09:49.145130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.853 [2024-07-14 15:09:49.145152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.853 [2024-07-14 15:09:49.149233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.115 [2024-07-14 15:09:49.158617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.159093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.159134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.159159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.159442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.159728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.159760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.159781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.163866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.115 [2024-07-14 15:09:49.173234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.173682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.173722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.173747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.174040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.174324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.174361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.174385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.178467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.115 [2024-07-14 15:09:49.187838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.188299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.188340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.188366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.188646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.188944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.188976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.188997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.193084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.115 [2024-07-14 15:09:49.202465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.202901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.202941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.202967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.203249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.203533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.203565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.203586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.207664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.115 [2024-07-14 15:09:49.217033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.217483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.217524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.217550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.217831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.218125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.218169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.218191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.222277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.115 [2024-07-14 15:09:49.231395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.231860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.231908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.231934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.232216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.232501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.232532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.232554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.236631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.115 [2024-07-14 15:09:49.245999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.246432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.246471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.246496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.246777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.247075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.247107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.247129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.251215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.115 [2024-07-14 15:09:49.260576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.115 [2024-07-14 15:09:49.261007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.115 [2024-07-14 15:09:49.261047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.115 [2024-07-14 15:09:49.261073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.115 [2024-07-14 15:09:49.261354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.115 [2024-07-14 15:09:49.261637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.115 [2024-07-14 15:09:49.261669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.115 [2024-07-14 15:09:49.261690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.115 [2024-07-14 15:09:49.265775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.115 [2024-07-14 15:09:49.275134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.275602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.275643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.275674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.275969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.276254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.276286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.276307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.280385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.116 [2024-07-14 15:09:49.289728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.290211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.290252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.290278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.290560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.290845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.290885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.290910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.294989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.116 [2024-07-14 15:09:49.304133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.304638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.304679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.304705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.304999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.305285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.305316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.305338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.309411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.116 [2024-07-14 15:09:49.318551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.319010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.319050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.319076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.319359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.319649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.319681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.319702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.323786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.116 [2024-07-14 15:09:49.333139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.333584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.333624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.333649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.333943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.334235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.334267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.334288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.338368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.116 [2024-07-14 15:09:49.347720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.348203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.348243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.348269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.348549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.348834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.348865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.348899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.352995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.116 [2024-07-14 15:09:49.362107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.362511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.362551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.362576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.362858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.363154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.363186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.363208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.367294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.116 [2024-07-14 15:09:49.376640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.377080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.116 [2024-07-14 15:09:49.377120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.116 [2024-07-14 15:09:49.377146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.116 [2024-07-14 15:09:49.377428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.116 [2024-07-14 15:09:49.377714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.116 [2024-07-14 15:09:49.377745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.116 [2024-07-14 15:09:49.377767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.116 [2024-07-14 15:09:49.381856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.116 [2024-07-14 15:09:49.391221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.116 [2024-07-14 15:09:49.391686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.117 [2024-07-14 15:09:49.391726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.117 [2024-07-14 15:09:49.391751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.117 [2024-07-14 15:09:49.392047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.117 [2024-07-14 15:09:49.392330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.117 [2024-07-14 15:09:49.392362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.117 [2024-07-14 15:09:49.392383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.117 [2024-07-14 15:09:49.396448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.117 [2024-07-14 15:09:49.405813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.117 [2024-07-14 15:09:49.406272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.117 [2024-07-14 15:09:49.406313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.117 [2024-07-14 15:09:49.406339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.117 [2024-07-14 15:09:49.406620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.117 [2024-07-14 15:09:49.406917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.117 [2024-07-14 15:09:49.406948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.117 [2024-07-14 15:09:49.406970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.117 [2024-07-14 15:09:49.411039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.117 [2024-07-14 15:09:49.420392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.117 [2024-07-14 15:09:49.420835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.117 [2024-07-14 15:09:49.420884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.117 [2024-07-14 15:09:49.420918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.421202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.421488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.421521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.421543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.425640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.377 [2024-07-14 15:09:49.434757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.435217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.435257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.435282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.435562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.435847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.435888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.435913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.439994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.377 [2024-07-14 15:09:49.449141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.449594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.449635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.449661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.449954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.450239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.450271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.450292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.454383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.377 [2024-07-14 15:09:49.463731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.464184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.464225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.464250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.464531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.464826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.464858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.464890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.468974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.377 [2024-07-14 15:09:49.478320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.478747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.478789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.478815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.479113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.479398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.479430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.479452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.483522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.377 [2024-07-14 15:09:49.492882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.493342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.493382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.493407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.493688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.493983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.494015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.494037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.498122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.377 [2024-07-14 15:09:49.507274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.507741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.507782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.507808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.508101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.508387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.508418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.508440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.512517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.377 [2024-07-14 15:09:49.521857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.522312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.522352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.522378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.522658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.522955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.522986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.523008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.527079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.377 [2024-07-14 15:09:49.536434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.536885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.536925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.536951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.537235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.537520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.537551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.537573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.541673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.377 [2024-07-14 15:09:49.550832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.551303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.551343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.551368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.551650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.551946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.551978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.552000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.556093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.377 [2024-07-14 15:09:49.565240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.377 [2024-07-14 15:09:49.565695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.377 [2024-07-14 15:09:49.565740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.377 [2024-07-14 15:09:49.565766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.377 [2024-07-14 15:09:49.566059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.377 [2024-07-14 15:09:49.566345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.377 [2024-07-14 15:09:49.566376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.377 [2024-07-14 15:09:49.566398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.377 [2024-07-14 15:09:49.570483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.378 [2024-07-14 15:09:49.579603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.580078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.580119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.580144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.580425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.580709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.580740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.580762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.584845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.378 [2024-07-14 15:09:49.594212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.594669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.594709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.594733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.595026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.595313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.595344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.595366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.599436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.378 [2024-07-14 15:09:49.608799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.609249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.609291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.609316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.609597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.609899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.609931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.609953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.614040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.378 [2024-07-14 15:09:49.623391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.623860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.623907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.623933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.624216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.624499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.624530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.624552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.628641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.378 [2024-07-14 15:09:49.638022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.638466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.638506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.638532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.638812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.639108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.639140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.639162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.643239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.378 [2024-07-14 15:09:49.652611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.653085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.653126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.653151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.653433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.653717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.653748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.653770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.657856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.378 [2024-07-14 15:09:49.667204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.667676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.667716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.667742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.668036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.668321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.668352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.668373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.378 [2024-07-14 15:09:49.672443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.378 [2024-07-14 15:09:49.681793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.378 [2024-07-14 15:09:49.682314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.378 [2024-07-14 15:09:49.682355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.378 [2024-07-14 15:09:49.682381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.378 [2024-07-14 15:09:49.682664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.378 [2024-07-14 15:09:49.682961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.378 [2024-07-14 15:09:49.682994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.378 [2024-07-14 15:09:49.683016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.687110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.637 [2024-07-14 15:09:49.696249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.696724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.696764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.696790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.697082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.697368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.697399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.697421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.701520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.637 [2024-07-14 15:09:49.710678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.711170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.711216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.711243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.711524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.711808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.711838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.711860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.715977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.637 [2024-07-14 15:09:49.725122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.725570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.725611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.725636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.725939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.726224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.726254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.726276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.730351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.637 [2024-07-14 15:09:49.739487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.739942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.739984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.740010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.740293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.740580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.740611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.740633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.744729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.637 [2024-07-14 15:09:49.753886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.754319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.754360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.754385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.754668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.754974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.755006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.755028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.759132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.637 [2024-07-14 15:09:49.768498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.768963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.769006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.769032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.769313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.769596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.769627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.769648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.773731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.637 [2024-07-14 15:09:49.782882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.783345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.637 [2024-07-14 15:09:49.783385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.637 [2024-07-14 15:09:49.783411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.637 [2024-07-14 15:09:49.783692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.637 [2024-07-14 15:09:49.783986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.637 [2024-07-14 15:09:49.784018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.637 [2024-07-14 15:09:49.784041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.637 [2024-07-14 15:09:49.788321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.637 [2024-07-14 15:09:49.797476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.637 [2024-07-14 15:09:49.797935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.797984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.798011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.798302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.798591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.798623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.798650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.802762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.638 [2024-07-14 15:09:49.811907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.812384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.812425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.812451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.812732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.813038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.813069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.813092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.817175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.638 [2024-07-14 15:09:49.826314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.826739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.826779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.826804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.827095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.827379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.827410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.827432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.831515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.638 [2024-07-14 15:09:49.840904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.841329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.841379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.841419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.841700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.842004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.842035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.842057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.846161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.638 [2024-07-14 15:09:49.855292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.855733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.855781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.855808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.856101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.856386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.856418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.856440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.860522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.638 [2024-07-14 15:09:49.869899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.870348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.870388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.870413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.870694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.870992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.871023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.871045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.875143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.638 [2024-07-14 15:09:49.884318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.884789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.884830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.884855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.885145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.885431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.885462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.885484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.889570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.638 [2024-07-14 15:09:49.898717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.899180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.899220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.899244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.899531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.899815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.899846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.899868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.903979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.638 [2024-07-14 15:09:49.913131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.913600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.913641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.913667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.913959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.914243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.914274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.914295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.918395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.638 [2024-07-14 15:09:49.927549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.928036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.928078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.928103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.928386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.928671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.928702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.928724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.638 [2024-07-14 15:09:49.932810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.638 [2024-07-14 15:09:49.942192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.638 [2024-07-14 15:09:49.942613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.638 [2024-07-14 15:09:49.942653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.638 [2024-07-14 15:09:49.942679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.638 [2024-07-14 15:09:49.942973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.638 [2024-07-14 15:09:49.943259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.638 [2024-07-14 15:09:49.943290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.638 [2024-07-14 15:09:49.943317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:49.947422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:49.956801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:49.957239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:49.957280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:49.957306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:49.957588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:49.957872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:49.957914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:49.957936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:49.962018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:49.971412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:49.971889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:49.971941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:49.971967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:49.972249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:49.972534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:49.972565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:49.972587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:49.976694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:49.985915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:49.986368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:49.986408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:49.986434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:49.986715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:49.987012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:49.987044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:49.987066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:49.991176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.000341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.000827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.000868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.000904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.001194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.001479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.001511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.001533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.005641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:50.014858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.015344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.015386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.015413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.015696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.015994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.016027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.016049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.020170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.030918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.031559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.031613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.031650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.032111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.032406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.032436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.032457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.036282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:50.045035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.045521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.045560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.045584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.045852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.046136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.046165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.046186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.049940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.059401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.059834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.059871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.059906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.060185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.060453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.060479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.060498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.064358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:50.073465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.073973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.074010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.074033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.074308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.074569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.074596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.074615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.078408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.087663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.088115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.088152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.088175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.088447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.088712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.088738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.088763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.092615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:50.101720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.102190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.102228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.102251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.102537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.102794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.102820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.102839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.106463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.115681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.116165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.116203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.116225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.116506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.116741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.116767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.116784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.120412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.900 [2024-07-14 15:09:50.129649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.130123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.130160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.130183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.900 [2024-07-14 15:09:50.130466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.900 [2024-07-14 15:09:50.130700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.900 [2024-07-14 15:09:50.130726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.900 [2024-07-14 15:09:50.130743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.900 [2024-07-14 15:09:50.134342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.900 [2024-07-14 15:09:50.143503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.900 [2024-07-14 15:09:50.144002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.900 [2024-07-14 15:09:50.144039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.900 [2024-07-14 15:09:50.144062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.901 [2024-07-14 15:09:50.144341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.901 [2024-07-14 15:09:50.144596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.901 [2024-07-14 15:09:50.144622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.901 [2024-07-14 15:09:50.144640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.901 [2024-07-14 15:09:50.148288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.901 [2024-07-14 15:09:50.157403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.901 [2024-07-14 15:09:50.157929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.901 [2024-07-14 15:09:50.157966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.901 [2024-07-14 15:09:50.157990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.901 [2024-07-14 15:09:50.158274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.901 [2024-07-14 15:09:50.158526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.901 [2024-07-14 15:09:50.158553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.901 [2024-07-14 15:09:50.158571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.901 [2024-07-14 15:09:50.162217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.901 [2024-07-14 15:09:50.171479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.901 [2024-07-14 15:09:50.171916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.901 [2024-07-14 15:09:50.171954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.901 [2024-07-14 15:09:50.171977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.901 [2024-07-14 15:09:50.172261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.901 [2024-07-14 15:09:50.172495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.901 [2024-07-14 15:09:50.172521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.901 [2024-07-14 15:09:50.172538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.901 [2024-07-14 15:09:50.176109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.901 [2024-07-14 15:09:50.185353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.901 [2024-07-14 15:09:50.185781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.901 [2024-07-14 15:09:50.185818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.901 [2024-07-14 15:09:50.185841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.901 [2024-07-14 15:09:50.186109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.901 [2024-07-14 15:09:50.186366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.901 [2024-07-14 15:09:50.186392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.901 [2024-07-14 15:09:50.186410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.901 [2024-07-14 15:09:50.190008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.901 [2024-07-14 15:09:50.199016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.901 [2024-07-14 15:09:50.199466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.901 [2024-07-14 15:09:50.199519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.901 [2024-07-14 15:09:50.199542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.901 [2024-07-14 15:09:50.199837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.901 [2024-07-14 15:09:50.200141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.901 [2024-07-14 15:09:50.200170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.901 [2024-07-14 15:09:50.200190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.901 [2024-07-14 15:09:50.203931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.162 [2024-07-14 15:09:50.213045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.162 [2024-07-14 15:09:50.213451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.162 [2024-07-14 15:09:50.213502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.162 [2024-07-14 15:09:50.213526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.162 [2024-07-14 15:09:50.213806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.162 [2024-07-14 15:09:50.214092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.162 [2024-07-14 15:09:50.214120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.162 [2024-07-14 15:09:50.214140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.162 [2024-07-14 15:09:50.217546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.162 [2024-07-14 15:09:50.226802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.162 [2024-07-14 15:09:50.227274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.162 [2024-07-14 15:09:50.227325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.162 [2024-07-14 15:09:50.227347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.162 [2024-07-14 15:09:50.227646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.162 [2024-07-14 15:09:50.227907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.162 [2024-07-14 15:09:50.227950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.162 [2024-07-14 15:09:50.227976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.162 [2024-07-14 15:09:50.231412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.162 [2024-07-14 15:09:50.240626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.162 [2024-07-14 15:09:50.241031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.162 [2024-07-14 15:09:50.241068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.162 [2024-07-14 15:09:50.241091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.162 [2024-07-14 15:09:50.241372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.162 [2024-07-14 15:09:50.241606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.162 [2024-07-14 15:09:50.241632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.162 [2024-07-14 15:09:50.241662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.162 [2024-07-14 15:09:50.245122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.162 [2024-07-14 15:09:50.254298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.162 [2024-07-14 15:09:50.254722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.162 [2024-07-14 15:09:50.254766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.162 [2024-07-14 15:09:50.254790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.162 [2024-07-14 15:09:50.255069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.162 [2024-07-14 15:09:50.255342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.162 [2024-07-14 15:09:50.255368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.255386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.258811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.268075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.268528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.268565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.268587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.268893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.269165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.269194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.269227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.272614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.163 [2024-07-14 15:09:50.281832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.282250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.282285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.282308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.282581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.282815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.282840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.282874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.286332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.295502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.295931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.295967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.295990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.296274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.296532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.296559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.296579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.300422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.163 [2024-07-14 15:09:50.309428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.309909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.309947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.309970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.310249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.310482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.310508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.310525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.313985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.323178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.323619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.323655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.323677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.323985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.324233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.324274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.324292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.327663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.163 [2024-07-14 15:09:50.336856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.337352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.337403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.337427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.337705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.337970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.337998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.338018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.341422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.350656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.351109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.351146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.351169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.351463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.351696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.351721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.351739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.355218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.163 [2024-07-14 15:09:50.364469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.364895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.364933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.364956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.365241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.365475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.365501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.365523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.369321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.378844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.379346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.379380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.379418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.379707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.380011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.380039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.380073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.384159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.163 [2024-07-14 15:09:50.393331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.393899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.393941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.393967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.394249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.163 [2024-07-14 15:09:50.394535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.163 [2024-07-14 15:09:50.394566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.163 [2024-07-14 15:09:50.394587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.163 [2024-07-14 15:09:50.398675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.163 [2024-07-14 15:09:50.407837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.163 [2024-07-14 15:09:50.408303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.163 [2024-07-14 15:09:50.408344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.163 [2024-07-14 15:09:50.408369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.163 [2024-07-14 15:09:50.408650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.164 [2024-07-14 15:09:50.408949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.164 [2024-07-14 15:09:50.408980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.164 [2024-07-14 15:09:50.409002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.164 [2024-07-14 15:09:50.413113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.164 [2024-07-14 15:09:50.422247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.164 [2024-07-14 15:09:50.422696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.164 [2024-07-14 15:09:50.422738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.164 [2024-07-14 15:09:50.422763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.164 [2024-07-14 15:09:50.423058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.164 [2024-07-14 15:09:50.423345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.164 [2024-07-14 15:09:50.423376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.164 [2024-07-14 15:09:50.423397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.164 [2024-07-14 15:09:50.427511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.164 [2024-07-14 15:09:50.436629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.164 [2024-07-14 15:09:50.437115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.164 [2024-07-14 15:09:50.437157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.164 [2024-07-14 15:09:50.437182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.164 [2024-07-14 15:09:50.437462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.164 [2024-07-14 15:09:50.437746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.164 [2024-07-14 15:09:50.437777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.164 [2024-07-14 15:09:50.437798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.164 [2024-07-14 15:09:50.441874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.164 [2024-07-14 15:09:50.451037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.164 [2024-07-14 15:09:50.451503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.164 [2024-07-14 15:09:50.451543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.164 [2024-07-14 15:09:50.451569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.164 [2024-07-14 15:09:50.451849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.164 [2024-07-14 15:09:50.452144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.164 [2024-07-14 15:09:50.452176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.164 [2024-07-14 15:09:50.452198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.164 [2024-07-14 15:09:50.456286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.164 [2024-07-14 15:09:50.465630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.164 [2024-07-14 15:09:50.466104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.164 [2024-07-14 15:09:50.466145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.164 [2024-07-14 15:09:50.466170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.164 [2024-07-14 15:09:50.466458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.164 [2024-07-14 15:09:50.466742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.164 [2024-07-14 15:09:50.466773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.164 [2024-07-14 15:09:50.466795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.425 [2024-07-14 15:09:50.470907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.425 [2024-07-14 15:09:50.480024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.425 [2024-07-14 15:09:50.480479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.425 [2024-07-14 15:09:50.480519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.425 [2024-07-14 15:09:50.480544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.425 [2024-07-14 15:09:50.480824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.425 [2024-07-14 15:09:50.481120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.425 [2024-07-14 15:09:50.481152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.425 [2024-07-14 15:09:50.481174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.425 [2024-07-14 15:09:50.485251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.425 [2024-07-14 15:09:50.494586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.425 [2024-07-14 15:09:50.495054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.425 [2024-07-14 15:09:50.495095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.425 [2024-07-14 15:09:50.495120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.425 [2024-07-14 15:09:50.495401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.495684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.495715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.495737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.499816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.509182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.509641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.509681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.509705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.509998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.510282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.510320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.510343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.514418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.426 [2024-07-14 15:09:50.523752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.524223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.524263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.524289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.524569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.524852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.524894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.524918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.528987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.538319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.538797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.538838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.538865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.539165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.539449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.539480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.539502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.543568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.426 [2024-07-14 15:09:50.552910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.553408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.553449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.553474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.553755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.554061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.554094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.554116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.558177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.567273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.567844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.567893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.567920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.568202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.568485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.568516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.568538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.572612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.426 [2024-07-14 15:09:50.581703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.582260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.582320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.582345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.582625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.582922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.582954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.582976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.587041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.596119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.596545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.596585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.596610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.596901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.597185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.597217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.597238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.601290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.426 [2024-07-14 15:09:50.610591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.611092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.611133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.611159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.611444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.611727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.611759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.611780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.615826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.625135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.625597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.625655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.625680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.625974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.626255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.626287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.626309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.630365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.426 [2024-07-14 15:09:50.639677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.640148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.640190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.640215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.426 [2024-07-14 15:09:50.640495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.426 [2024-07-14 15:09:50.640778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.426 [2024-07-14 15:09:50.640809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.426 [2024-07-14 15:09:50.640831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.426 [2024-07-14 15:09:50.644900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.426 [2024-07-14 15:09:50.654224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.426 [2024-07-14 15:09:50.654689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.426 [2024-07-14 15:09:50.654740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.426 [2024-07-14 15:09:50.654766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.655060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.655343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.655381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.655404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.659467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.427 [2024-07-14 15:09:50.668775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.427 [2024-07-14 15:09:50.669241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.427 [2024-07-14 15:09:50.669282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.427 [2024-07-14 15:09:50.669307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.669587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.669868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.669911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.669933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.674000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.427 [2024-07-14 15:09:50.683292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.427 [2024-07-14 15:09:50.683842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.427 [2024-07-14 15:09:50.683908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.427 [2024-07-14 15:09:50.683934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.684215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.684497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.684528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.684550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.688608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.427 [2024-07-14 15:09:50.697670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.427 [2024-07-14 15:09:50.698130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.427 [2024-07-14 15:09:50.698170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.427 [2024-07-14 15:09:50.698195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.698476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.698760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.698791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.698813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.702888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.427 [2024-07-14 15:09:50.712236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.427 [2024-07-14 15:09:50.712671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.427 [2024-07-14 15:09:50.712712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.427 [2024-07-14 15:09:50.712739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.713034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.713318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.713349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.713371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.717434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
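Note on the repeated failures above: each cycle follows the same shape. bdev_nvme disconnects the controller, nvme_tcp tries to reconnect to 10.0.0.2:4420, the posix socket layer's connect() returns errno 111 (ECONNREFUSED) because nothing is listening there yet, and the reset completes as failed before the next retry. A minimal probe that reproduces the same errno from the shell, assuming a bash with /dev/tcp support on the test host (illustrative only, not part of the test suite):

# Illustrative only: probe the subsystem listener the way the reset path does.
# With no nvmf_tgt listening on 10.0.0.2:4420 this fails with ECONNREFUSED,
# the same errno 111 reported by posix.c:posix_sock_create in the log above.
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connect to 10.0.0.2:4420 refused (errno 111 path)"
fi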
00:37:11.427 [2024-07-14 15:09:50.726747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.427 [2024-07-14 15:09:50.727198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.427 [2024-07-14 15:09:50.727239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.427 [2024-07-14 15:09:50.727264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.427 [2024-07-14 15:09:50.727544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.427 [2024-07-14 15:09:50.727825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.427 [2024-07-14 15:09:50.727856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.427 [2024-07-14 15:09:50.727891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.427 [2024-07-14 15:09:50.731953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2060173 Killed "${NVMF_APP[@]}" "$@" 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.687 [2024-07-14 15:09:50.741314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.741765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.741806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.741832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.742120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.687 [2024-07-14 15:09:50.742402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.687 [2024-07-14 15:09:50.742433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.687 [2024-07-14 15:09:50.742455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2061392 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2061392 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2061392 ']' 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.687 [2024-07-14 15:09:50.746525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.687 15:09:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.687 [2024-07-14 15:09:50.755861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.756325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.756367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.756392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.756672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.687 [2024-07-14 15:09:50.756972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.687 [2024-07-14 15:09:50.757005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.687 [2024-07-14 15:09:50.757027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.687 [2024-07-14 15:09:50.761103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
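At this point bdevperf.sh has killed the previous target ("Killed ${NVMF_APP[@]}") and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace, after which waitforlisten blocks until the new process (pid 2061392) is up and serving /var/tmp/spdk.sock. A rough sketch of that restart sequence, assuming only the behaviour implied by the trace (the helper body below is illustrative, not SPDK's actual common.sh):

# Illustrative sketch of the restart flow visible in the trace above;
# not SPDK's real nvmfappstart/waitforlisten implementation.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

restart_tgt() {
    # Launch a fresh target in the test netns with the same flags as the log shows.
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll until the RPC socket is accepting requests (cf. waitforlisten).
    for _ in $(seq 1 100); do
        [ -S "$RPC_SOCK" ] && return 0
        sleep 0.1
    done
    echo "nvmf_tgt ($nvmfpid) never started listening on $RPC_SOCK" >&2
    return 1
}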
00:37:11.687 [2024-07-14 15:09:50.770436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.770903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.770945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.770970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.771250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.687 [2024-07-14 15:09:50.771533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.687 [2024-07-14 15:09:50.771564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.687 [2024-07-14 15:09:50.771586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.687 [2024-07-14 15:09:50.775652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.687 [2024-07-14 15:09:50.784972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.785430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.785470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.785496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.785782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.687 [2024-07-14 15:09:50.786078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.687 [2024-07-14 15:09:50.786110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.687 [2024-07-14 15:09:50.786131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.687 [2024-07-14 15:09:50.790180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.687 [2024-07-14 15:09:50.799468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.799937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.799978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.800003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.800284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.687 [2024-07-14 15:09:50.800566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.687 [2024-07-14 15:09:50.800596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.687 [2024-07-14 15:09:50.800618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.687 [2024-07-14 15:09:50.804927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.687 [2024-07-14 15:09:50.814014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.687 [2024-07-14 15:09:50.814459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.687 [2024-07-14 15:09:50.814499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.687 [2024-07-14 15:09:50.814525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.687 [2024-07-14 15:09:50.814805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.815105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.815137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.815159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.819217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 [2024-07-14 15:09:50.828522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.828945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.828986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.829011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.829291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.829574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.829611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.829634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.832195] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:11.688 [2024-07-14 15:09:50.832331] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.688 [2024-07-14 15:09:50.833700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.688 [2024-07-14 15:09:50.843020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.843578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.843636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.843661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.843953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.844237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.844268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.844290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.848354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 [2024-07-14 15:09:50.857430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.857893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.857935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.857961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.858256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.858539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.858570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.858591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.862641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.688 [2024-07-14 15:09:50.871979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.872511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.872568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.872593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.872873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.873176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.873214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.873240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.877303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 [2024-07-14 15:09:50.886374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.886842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.886892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.886919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.887200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.887484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.887516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.887538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.891589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.688 [2024-07-14 15:09:50.900903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.901399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.901440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.901465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.901744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.902039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.902071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.902093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.906166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.688 [2024-07-14 15:09:50.915464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.915949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.915991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.916017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.916298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.916581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.916612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.916633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.920680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.688 [2024-07-14 15:09:50.930018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.930466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.930507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.930533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.930813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.931107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.931139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.931170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.935261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 [2024-07-14 15:09:50.944451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.944872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.944926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.944952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.945245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.945529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.945560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.945582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.949720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.688 [2024-07-14 15:09:50.959103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.959566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.688 [2024-07-14 15:09:50.959612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.688 [2024-07-14 15:09:50.959639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.688 [2024-07-14 15:09:50.959933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.688 [2024-07-14 15:09:50.960218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.688 [2024-07-14 15:09:50.960250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.688 [2024-07-14 15:09:50.960272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.688 [2024-07-14 15:09:50.964427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.688 [2024-07-14 15:09:50.973742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.688 [2024-07-14 15:09:50.974166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.689 [2024-07-14 15:09:50.974212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.689 [2024-07-14 15:09:50.974247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.689 [2024-07-14 15:09:50.974530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.689 [2024-07-14 15:09:50.974825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.689 [2024-07-14 15:09:50.974857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.689 [2024-07-14 15:09:50.974901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.689 [2024-07-14 15:09:50.979065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.689 [2024-07-14 15:09:50.984053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:11.689 [2024-07-14 15:09:50.988365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.689 [2024-07-14 15:09:50.988814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.689 [2024-07-14 15:09:50.988854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.689 [2024-07-14 15:09:50.988897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.689 [2024-07-14 15:09:50.989193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.689 [2024-07-14 15:09:50.989490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.689 [2024-07-14 15:09:50.989525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.689 [2024-07-14 15:09:50.989547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:50.993844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.002962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.003583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.003632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.003672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.003983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.004278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.004311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.004335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.008600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.949 [2024-07-14 15:09:51.017571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.018031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.018072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.018097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.018403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.018710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.018743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.018766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.022987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.032243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.032708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.032748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.032775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.033074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.033367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.033399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.033421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.037628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.949 [2024-07-14 15:09:51.046725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.047191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.047231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.047256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.047538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.047824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.047855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.047885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.052001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.061295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.061762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.061803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.061829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.062124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.062411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.062455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.062478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.066625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.949 [2024-07-14 15:09:51.075841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.076322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.076364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.076390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.076673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.076975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.077007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.077029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.081163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.090430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.090900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.090941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.090967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.091252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.091538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.091569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.091591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.095729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.949 [2024-07-14 15:09:51.105006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.105448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.105489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.105515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.105801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.106101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.106132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.106154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.110287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.119557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.120048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.120090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.120123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.120412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.120701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.120733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.120755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.124936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.949 [2024-07-14 15:09:51.134306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.134905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.134954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.134983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.135277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.949 [2024-07-14 15:09:51.135571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.949 [2024-07-14 15:09:51.135603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.949 [2024-07-14 15:09:51.135627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.949 [2024-07-14 15:09:51.139780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.949 [2024-07-14 15:09:51.148904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.949 [2024-07-14 15:09:51.149411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.949 [2024-07-14 15:09:51.149452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.949 [2024-07-14 15:09:51.149477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.949 [2024-07-14 15:09:51.149766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.150068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.150101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.150123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.154287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.950 [2024-07-14 15:09:51.163417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.163890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.163932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.163958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.164244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.164541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.164572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.164595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.168756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.950 [2024-07-14 15:09:51.178029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.178491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.178532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.178558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.178841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.179137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.179175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.179197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.183312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.950 [2024-07-14 15:09:51.192536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.193004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.193046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.193072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.193357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.193644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.193675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.193698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.197815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.950 [2024-07-14 15:09:51.207080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.207519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.207560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.207586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.207872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.208169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.208201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.208223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.212367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.950 [2024-07-14 15:09:51.221632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.222083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.222124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.222149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.222434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.222724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.222756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.222778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.226931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.950 [2024-07-14 15:09:51.236193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.236670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.236711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.236737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.237035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.237324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.237355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.237376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.950 [2024-07-14 15:09:51.241566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.950 [2024-07-14 15:09:51.249369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.950 [2024-07-14 15:09:51.249420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.950 [2024-07-14 15:09:51.249455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.950 [2024-07-14 15:09:51.249476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.950 [2024-07-14 15:09:51.249506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:11.950 [2024-07-14 15:09:51.249706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:11.950 [2024-07-14 15:09:51.249772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.950 [2024-07-14 15:09:51.249781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:11.950 [2024-07-14 15:09:51.250699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.950 [2024-07-14 15:09:51.251152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.950 [2024-07-14 15:09:51.251201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.950 [2024-07-14 15:09:51.251227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.950 [2024-07-14 15:09:51.251522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.950 [2024-07-14 15:09:51.251812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.950 [2024-07-14 15:09:51.251843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.950 [2024-07-14 15:09:51.251866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.256151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.210 [2024-07-14 15:09:51.265369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.210 [2024-07-14 15:09:51.266021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.210 [2024-07-14 15:09:51.266074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.210 [2024-07-14 15:09:51.266105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.210 [2024-07-14 15:09:51.266408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.210 [2024-07-14 15:09:51.266708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.210 [2024-07-14 15:09:51.266740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.210 [2024-07-14 15:09:51.266765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.271027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.210 [2024-07-14 15:09:51.280194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.210 [2024-07-14 15:09:51.280633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.210 [2024-07-14 15:09:51.280674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.210 [2024-07-14 15:09:51.280700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.210 [2024-07-14 15:09:51.281004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.210 [2024-07-14 15:09:51.281305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.210 [2024-07-14 15:09:51.281337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.210 [2024-07-14 15:09:51.281360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.285542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.210 [2024-07-14 15:09:51.294892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.210 [2024-07-14 15:09:51.295364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.210 [2024-07-14 15:09:51.295404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.210 [2024-07-14 15:09:51.295430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.210 [2024-07-14 15:09:51.295717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.210 [2024-07-14 15:09:51.296020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.210 [2024-07-14 15:09:51.296052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.210 [2024-07-14 15:09:51.296081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.300278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.210 [2024-07-14 15:09:51.309352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.210 [2024-07-14 15:09:51.309809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.210 [2024-07-14 15:09:51.309850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.210 [2024-07-14 15:09:51.309885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.210 [2024-07-14 15:09:51.310174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.210 [2024-07-14 15:09:51.310461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.210 [2024-07-14 15:09:51.310492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.210 [2024-07-14 15:09:51.310513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.314683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.210 [2024-07-14 15:09:51.323950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.210 [2024-07-14 15:09:51.324416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.210 [2024-07-14 15:09:51.324456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.210 [2024-07-14 15:09:51.324482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.210 [2024-07-14 15:09:51.324766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.210 [2024-07-14 15:09:51.325067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.210 [2024-07-14 15:09:51.325099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.210 [2024-07-14 15:09:51.325121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.210 [2024-07-14 15:09:51.329261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.338602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.339260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.339318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.339348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.339648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.339956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.339990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.340015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.344216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.211 [2024-07-14 15:09:51.353353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.354055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.354113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.354143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.354439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.354736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.354769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.354794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.359017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.368109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.368737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.368819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.369129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.369425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.369458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.369483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.373658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.211 [2024-07-14 15:09:51.382357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.382794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.382831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.382854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.383119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.383391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.383418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.383438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.387204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.396551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.396980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.397017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.397040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.397319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.397573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.397600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.397620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.401335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.211 [2024-07-14 15:09:51.410665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.411130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.411167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.411190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.411460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.411710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.411737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.411757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.415453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.424708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.425139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.425176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.425199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.425469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.425718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.425745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.425764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.429413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.211 [2024-07-14 15:09:51.438613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.439029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.439065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.439088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.439357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.439607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.439634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.439658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.443386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.452579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.453023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.453060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.453082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.453352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.453600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.453627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.453646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.457304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.211 [2024-07-14 15:09:51.466568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.466999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.467036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.467059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.467329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.467578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.467606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.211 [2024-07-14 15:09:51.467625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.211 [2024-07-14 15:09:51.471285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.211 [2024-07-14 15:09:51.480554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.211 [2024-07-14 15:09:51.481022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.211 [2024-07-14 15:09:51.481060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.211 [2024-07-14 15:09:51.481084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.211 [2024-07-14 15:09:51.481358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.211 [2024-07-14 15:09:51.481613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.211 [2024-07-14 15:09:51.481641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.212 [2024-07-14 15:09:51.481661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.212 [2024-07-14 15:09:51.485388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.212 [2024-07-14 15:09:51.494637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.212 [2024-07-14 15:09:51.495300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.212 [2024-07-14 15:09:51.495349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.212 [2024-07-14 15:09:51.495377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.212 [2024-07-14 15:09:51.495656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.212 [2024-07-14 15:09:51.495945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.212 [2024-07-14 15:09:51.495975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.212 [2024-07-14 15:09:51.495998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.212 [2024-07-14 15:09:51.499732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.212 [2024-07-14 15:09:51.508731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.212 [2024-07-14 15:09:51.509177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.212 [2024-07-14 15:09:51.509215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.212 [2024-07-14 15:09:51.509238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.212 [2024-07-14 15:09:51.509510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.212 [2024-07-14 15:09:51.509765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.212 [2024-07-14 15:09:51.509792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.212 [2024-07-14 15:09:51.509811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.212 [2024-07-14 15:09:51.513653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.472 [2024-07-14 15:09:51.523113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.523551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.523588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.523611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.523909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.524172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.524216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.524236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.527964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.472 [2024-07-14 15:09:51.537046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.537467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.537503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.537526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.537802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.538061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.538089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.538109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.541766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.472 [2024-07-14 15:09:51.551098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.551534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.551569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.551593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.551863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.552148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.552176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.552212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.555901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.472 [2024-07-14 15:09:51.565099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.565534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.565570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.565593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.565854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.566120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.566148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.566168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.569858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.472 [2024-07-14 15:09:51.579181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.579634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.579673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.579697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.579964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.580240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.580268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.580294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.584047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.472 [2024-07-14 15:09:51.593157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.593606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.593644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.593668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.593970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.594245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.594273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.594293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.472 [2024-07-14 15:09:51.598005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.472 [2024-07-14 15:09:51.607181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.472 [2024-07-14 15:09:51.607622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.472 [2024-07-14 15:09:51.607659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.472 [2024-07-14 15:09:51.607682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.472 [2024-07-14 15:09:51.607980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.472 [2024-07-14 15:09:51.608241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.472 [2024-07-14 15:09:51.608282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.472 [2024-07-14 15:09:51.608301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.611981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 [2024-07-14 15:09:51.621096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.621529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.621566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.621589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.621884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.622145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.622188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.622208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.625855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.473 [2024-07-14 15:09:51.635172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.635594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.635631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.635654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.635948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.636207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.636251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.636271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.639944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 [2024-07-14 15:09:51.649131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.649553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.649590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.649613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.649908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.650166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.650210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.650229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.653858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.473 [2024-07-14 15:09:51.663106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.663555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.663591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.663614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.663907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.664164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.664208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.664227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.667849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 [2024-07-14 15:09:51.677043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.677454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.677490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.677527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.677800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.678082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.678111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.678131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.681749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.473 [2024-07-14 15:09:51.690930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.691357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.691394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.691416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.691684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.691941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.691969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.691988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.695618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 [2024-07-14 15:09:51.704874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.705278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.705315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.705337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.705605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.705860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.705915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.705936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.709573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.473 [2024-07-14 15:09:51.719003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.719427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.719464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.719487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.719754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.720045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.720075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.720101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.723842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 [2024-07-14 15:09:51.733142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.733605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.733642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.733665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.733930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.734188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.734216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.734236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.473 [2024-07-14 15:09:51.738020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.473 [2024-07-14 15:09:51.747395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.473 [2024-07-14 15:09:51.747847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.473 [2024-07-14 15:09:51.747902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.473 [2024-07-14 15:09:51.747926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.473 [2024-07-14 15:09:51.748207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.473 [2024-07-14 15:09:51.748470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.473 [2024-07-14 15:09:51.748497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.473 [2024-07-14 15:09:51.748516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.473 [2024-07-14 15:09:51.752351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.473 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.473 [2024-07-14 15:09:51.756688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.474 [2024-07-14 15:09:51.761470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.474 [2024-07-14 15:09:51.761964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.474 [2024-07-14 15:09:51.762002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.474 [2024-07-14 15:09:51.762030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.474 [2024-07-14 15:09:51.762318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.474 [2024-07-14 15:09:51.762562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.474 [2024-07-14 15:09:51.762589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.474 [2024-07-14 15:09:51.762607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.474 [2024-07-14 15:09:51.766273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.474 [2024-07-14 15:09:51.775715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.474 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.474 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:12.474 [2024-07-14 15:09:51.776122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.474 [2024-07-14 15:09:51.776159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.474 [2024-07-14 15:09:51.776182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.474 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.474 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.474 [2024-07-14 15:09:51.776476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.474 [2024-07-14 15:09:51.776763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.474 [2024-07-14 15:09:51.776791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.474 [2024-07-14 15:09:51.776811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:12.732 [2024-07-14 15:09:51.780742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.732 [2024-07-14 15:09:51.789911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.732 [2024-07-14 15:09:51.790509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.732 [2024-07-14 15:09:51.790556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.732 [2024-07-14 15:09:51.790583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.732 [2024-07-14 15:09:51.790900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.732 [2024-07-14 15:09:51.791176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.732 [2024-07-14 15:09:51.791220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.732 [2024-07-14 15:09:51.791244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.732 [2024-07-14 15:09:51.795048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.732 [2024-07-14 15:09:51.804257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.732 [2024-07-14 15:09:51.804763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.732 [2024-07-14 15:09:51.804805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.732 [2024-07-14 15:09:51.804836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.732 [2024-07-14 15:09:51.805112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.732 [2024-07-14 15:09:51.805384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.732 [2024-07-14 15:09:51.805412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.732 [2024-07-14 15:09:51.805433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.732 [2024-07-14 15:09:51.809159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.732 [2024-07-14 15:09:51.818264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.732 [2024-07-14 15:09:51.818694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.732 [2024-07-14 15:09:51.818731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.732 [2024-07-14 15:09:51.818754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.733 [2024-07-14 15:09:51.819211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.733 [2024-07-14 15:09:51.819472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.733 [2024-07-14 15:09:51.819501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.733 [2024-07-14 15:09:51.819521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.733 [2024-07-14 15:09:51.823258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.733 [2024-07-14 15:09:51.832253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.733 [2024-07-14 15:09:51.832672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.733 [2024-07-14 15:09:51.832709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.733 [2024-07-14 15:09:51.832732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.733 [2024-07-14 15:09:51.832997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.733 [2024-07-14 15:09:51.833273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.733 [2024-07-14 15:09:51.833301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.733 [2024-07-14 15:09:51.833320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.733 [2024-07-14 15:09:51.837009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.733 [2024-07-14 15:09:51.846147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.733 [2024-07-14 15:09:51.846563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.733 [2024-07-14 15:09:51.846600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.733 [2024-07-14 15:09:51.846622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.733 [2024-07-14 15:09:51.846919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.733 [2024-07-14 15:09:51.847185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.733 [2024-07-14 15:09:51.847233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.733 [2024-07-14 15:09:51.847254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.733 [2024-07-14 15:09:51.850948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.733 Malloc0 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.733 [2024-07-14 15:09:51.860327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.733 [2024-07-14 15:09:51.860736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.733 [2024-07-14 15:09:51.860773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.733 [2024-07-14 15:09:51.860796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.733 [2024-07-14 15:09:51.861062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.733 [2024-07-14 15:09:51.861333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.733 [2024-07-14 15:09:51.861360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.733 [2024-07-14 15:09:51.861379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:12.733 [2024-07-14 15:09:51.865118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.733 [2024-07-14 15:09:51.870947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.733 [2024-07-14 15:09:51.874448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.733 15:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2060729 00:37:12.733 [2024-07-14 15:09:52.002181] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:22.739 00:37:22.739 Latency(us) 00:37:22.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.739 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:22.739 Verification LBA range: start 0x0 length 0x4000 00:37:22.739 Nvme1n1 : 15.01 4525.15 17.68 9281.80 0.00 9240.55 1116.54 31263.10 00:37:22.739 =================================================================================================================== 00:37:22.739 Total : 4525.15 17.68 9281.80 0.00 9240.55 1116.54 31263.10 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:22.739 rmmod nvme_tcp 00:37:22.739 rmmod nvme_fabrics 00:37:22.739 rmmod nvme_keyring 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2061392 ']' 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2061392 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2061392 ']' 
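The bdevperf host test above brings the target up through the five RPCs visible in the xtrace: create the TCP transport, create a 64 MB / 512-byte-block Malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and add the 10.0.0.2:4420 listener; the controller resets that were failing with errno 111 start succeeding once the listener is in place. As a rough stand-alone equivalent (a sketch only, assuming rpc_cmd in the test scripts simply forwards these arguments to scripts/rpc.py against the target's default RPC socket):

    # transport first, with the same options the test passes (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MB backing bdev with 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a), fixed serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listener last; bdevperf's reconnect attempts succeed from this point on
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420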
00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2061392 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2061392 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2061392' 00:37:22.739 killing process with pid 2061392 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2061392 00:37:22.739 15:10:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2061392 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:24.115 15:10:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.021 15:10:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:26.021 00:37:26.021 real 0m26.708s 00:37:26.021 user 1m13.833s 00:37:26.021 sys 0m4.366s 00:37:26.021 15:10:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:26.021 15:10:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:26.021 ************************************ 00:37:26.021 END TEST nvmf_bdevperf 00:37:26.021 ************************************ 00:37:26.021 15:10:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:26.021 15:10:05 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.021 15:10:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:26.021 15:10:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:26.021 15:10:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.021 ************************************ 00:37:26.021 START TEST nvmf_target_disconnect 00:37:26.021 ************************************ 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.021 * Looking for test storage... 
00:37:26.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.021 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:26.022 15:10:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:27.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:27.926 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.926 15:10:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:27.926 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:27.926 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:27.926 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:28.185 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:28.185 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:28.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:28.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:37:28.186 00:37:28.186 --- 10.0.0.2 ping statistics --- 00:37:28.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.186 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:28.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:28.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:37:28.186 00:37:28.186 --- 10.0.0.1 ping statistics --- 00:37:28.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.186 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.186 ************************************ 00:37:28.186 START TEST nvmf_target_disconnect_tc1 00:37:28.186 ************************************ 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:37:28.186 
15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:28.186 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.444 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.444 [2024-07-14 15:10:07.559705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.444 [2024-07-14 15:10:07.559847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:28.444 [2024-07-14 15:10:07.559962] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:28.444 [2024-07-14 15:10:07.560000] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:28.444 [2024-07-14 15:10:07.560026] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:28.444 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:28.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:28.444 Initializing NVMe Controllers 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.444 00:37:28.444 real 0m0.237s 00:37:28.444 user 0m0.099s 00:37:28.444 sys 
0m0.137s 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:28.444 ************************************ 00:37:28.444 END TEST nvmf_target_disconnect_tc1 00:37:28.444 ************************************ 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.444 ************************************ 00:37:28.444 START TEST nvmf_target_disconnect_tc2 00:37:28.444 ************************************ 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2064807 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2064807 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2064807 ']' 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
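For readers tracing the tc2 setup that follows, the rpc_cmd invocations in this test are thin wrappers around SPDK's standard JSON-RPC client; a minimal sketch of the equivalent calls, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout at $SPDK_DIR (illustrative only; bdev name, NQN, address and port are taken from the log lines below):

  # back the namespace with a 64 MB malloc bdev using 512-byte blocks
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # bring up the TCP transport, then expose Malloc0 through cnode1 on 10.0.0.2:4420
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener in place, the test launches the reconnect example against 10.0.0.2:4420 and then kills the target, so the connect() failures with errno 111 recorded further down are the expected disconnect behavior under test.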
00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:28.444 15:10:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.703 [2024-07-14 15:10:07.752624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:28.703 [2024-07-14 15:10:07.752793] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.703 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.703 [2024-07-14 15:10:07.905622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:28.963 [2024-07-14 15:10:08.137738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.963 [2024-07-14 15:10:08.137802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.963 [2024-07-14 15:10:08.137842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.963 [2024-07-14 15:10:08.137860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.963 [2024-07-14 15:10:08.137904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.963 [2024-07-14 15:10:08.138031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:28.963 [2024-07-14 15:10:08.138080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:28.963 [2024-07-14 15:10:08.138127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:28.963 [2024-07-14 15:10:08.138118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 Malloc0 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:29.529 15:10:08 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 [2024-07-14 15:10:08.755874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 [2024-07-14 15:10:08.786096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2064957 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:29.529 15:10:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.789 EAL: No free 2048 kB 
hugepages reported on node 1 00:37:31.708 15:10:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2064807 00:37:31.708 15:10:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 [2024-07-14 15:10:10.826701] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Read completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.708 Write completed with error (sct=0, sc=8) 00:37:31.708 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 [2024-07-14 15:10:10.827389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O 
failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 [2024-07-14 15:10:10.828070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 
00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Write completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 Read completed with error (sct=0, sc=8) 00:37:31.709 starting I/O failed 00:37:31.709 [2024-07-14 15:10:10.828701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:31.709 [2024-07-14 15:10:10.828957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.829009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.829191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.829243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.829448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.829488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.829656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.829709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.829874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.829914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.830041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.830074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 
00:37:31.709 [2024-07-14 15:10:10.830272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.830330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.830470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.830505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.830670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.830708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.830887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.830921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.831035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.831069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.831242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.709 [2024-07-14 15:10:10.831292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.709 qpair failed and we were unable to recover it. 00:37:31.709 [2024-07-14 15:10:10.831444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.831481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.831624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.831674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.831851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.831906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.832021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.832056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 
00:37:31.710 [2024-07-14 15:10:10.832233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.832267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.832408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.832442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.832652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.832685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.832813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.832846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.833913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.833948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 
00:37:31.710 [2024-07-14 15:10:10.834060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.834095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.834214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.834247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.834383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.834417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.834597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.834635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.834816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.834852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.834992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.835026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.835283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.835341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.835608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.835665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.835805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.835838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.836001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.836037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 
00:37:31.710 [2024-07-14 15:10:10.836187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.836241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.836437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.836488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.836709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.836807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.837907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.837962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.838129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.838178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.838290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.838325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.838468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.838526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.838650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.838684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.838803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.838838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.839009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 
00:37:31.710 [2024-07-14 15:10:10.839211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.839385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.839559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.839712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.839894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.839928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.840061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.840096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.840240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.710 [2024-07-14 15:10:10.840275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.710 qpair failed and we were unable to recover it. 00:37:31.710 [2024-07-14 15:10:10.840414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.840447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.840613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.840649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.840798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.840856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 
00:37:31.711 [2024-07-14 15:10:10.841029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.841077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.841222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.841257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.841395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.841429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.841542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.841576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.841741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.841774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.841969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.842104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.842288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.842494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.842647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 
00:37:31.711 [2024-07-14 15:10:10.842874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.842921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.843089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.843124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.843291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.843325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.843470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.843503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.843645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.843678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.843818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.843870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.844066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.844209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.844387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.844600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 
00:37:31.711 [2024-07-14 15:10:10.844797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.844965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.844999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.845131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.845179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.845334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.845371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.845534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.845584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.845737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.845774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.845936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.845976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.846089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.846261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.846407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 
00:37:31.711 [2024-07-14 15:10:10.846583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.846746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.846918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.846952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.847058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.847091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.847250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.847287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.847472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.847505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.847670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.847704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.711 qpair failed and we were unable to recover it. 00:37:31.711 [2024-07-14 15:10:10.847892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.711 [2024-07-14 15:10:10.847958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.848105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.848141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.848312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.848378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 
00:37:31.712 [2024-07-14 15:10:10.848539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.848572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.848711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.848745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.848905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.848953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.849130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.849165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.849285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.849319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.849485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.849519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.849680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.849717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.849846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.849887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.850005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.850039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.850202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.850250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 
00:37:31.712 [2024-07-14 15:10:10.850431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.850468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.850594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.850643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.850819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.850853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.850982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.851015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.851118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.851151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.851343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.851380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.851558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.851592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.851721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.851755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.851973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.852110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 
00:37:31.712 [2024-07-14 15:10:10.852287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.852479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.852667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.852856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.852898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.853911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.853946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 
00:37:31.712 [2024-07-14 15:10:10.854082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.854115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.854247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.854280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.712 qpair failed and we were unable to recover it. 00:37:31.712 [2024-07-14 15:10:10.854383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.712 [2024-07-14 15:10:10.854416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.854558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.854591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.854763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.854796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.854938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.854972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.855114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.855150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.855311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.855345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.855475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.855509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.855642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.855676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 
00:37:31.713 [2024-07-14 15:10:10.855822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.855856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.856004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.856038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.856222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.856256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.856456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.856490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.856640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.856673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.856834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.856893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.857043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.857076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.857233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.857270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.857411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.857464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.857624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.857657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 
00:37:31.713 [2024-07-14 15:10:10.857844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.857889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.858866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.858906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.859050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.859246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.859387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 
00:37:31.713 [2024-07-14 15:10:10.859526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.859676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.859824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.859858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.860045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.860243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.860413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.860626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.860824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.860968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.861002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 00:37:31.713 [2024-07-14 15:10:10.861156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.713 [2024-07-14 15:10:10.861190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.713 qpair failed and we were unable to recover it. 
00:37:31.713 [2024-07-14 15:10:10.861327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.861360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.861497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.861548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.861772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.861819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.861972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.862124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.862292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.862494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.862668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.862816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.862850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.863015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.863062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 
00:37:31.714 [2024-07-14 15:10:10.863240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.863276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.863415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.863450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.863586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.863620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.863790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.863824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.863968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.864126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.864293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.864511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.864755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.864937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.864971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 
00:37:31.714 [2024-07-14 15:10:10.865109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.865143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.865310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.865343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.865480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.865514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.865679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.865723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.865930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.865979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.866105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.866297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.866484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.866631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.866802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 
00:37:31.714 [2024-07-14 15:10:10.866954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.866990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.867129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.867163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.867297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.867331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.867445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.867479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.867656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.867689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.867826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.867859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.868009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.868043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.868159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.868210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.868366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.868400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 00:37:31.714 [2024-07-14 15:10:10.868516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.868549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.714 qpair failed and we were unable to recover it. 
00:37:31.714 [2024-07-14 15:10:10.868709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.714 [2024-07-14 15:10:10.868742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.868859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.868905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.869896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.869930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.870074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.870107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.870275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.870324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 
00:37:31.715 [2024-07-14 15:10:10.870493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.870526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.870687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.870721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.870882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.870916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.871094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.871273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.871458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.871648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.871824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.871992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.872164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 
00:37:31.715 [2024-07-14 15:10:10.872338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.872506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.872658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.872858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.872926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.873122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.873297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.873476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.873631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.873824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.873974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 
00:37:31.715 [2024-07-14 15:10:10.874146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.874380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.874538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.874735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.874926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.874960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.875099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.875132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.875269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.875302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.875445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.875479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.875619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.875652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.875808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.875846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 
00:37:31.715 [2024-07-14 15:10:10.875991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.715 [2024-07-14 15:10:10.876024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.715 qpair failed and we were unable to recover it. 00:37:31.715 [2024-07-14 15:10:10.876131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.876165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.876325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.876362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.876488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.876521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.876682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.876715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.876832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.876865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.876973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.877121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.877310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.877473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 
00:37:31.716 [2024-07-14 15:10:10.877641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.877875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.877914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.878073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.878269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.878443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.878654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.878818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.878988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.879026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.879183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.879216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 00:37:31.716 [2024-07-14 15:10:10.879343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.716 [2024-07-14 15:10:10.879393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.716 qpair failed and we were unable to recover it. 
00:37:31.721 [2024-07-14 15:10:10.914549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.914598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.914789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.914842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.914960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.914996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.915141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.915176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.915370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.915422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.915587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.915637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.915778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.915813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.915955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.915989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.916124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.916175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.916314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.916367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 
00:37:31.721 [2024-07-14 15:10:10.916526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.916564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.916715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.916753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.916889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.916923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.917093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.917315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.917489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.917634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.917796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.917978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.918123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 
00:37:31.721 [2024-07-14 15:10:10.918291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.918568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.918755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.918933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.721 [2024-07-14 15:10:10.918967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.721 qpair failed and we were unable to recover it. 00:37:31.721 [2024-07-14 15:10:10.919097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.919130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.919318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.919355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.919464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.919500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.919681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.919718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.919873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.919930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.920048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.920081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 
00:37:31.722 [2024-07-14 15:10:10.920237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.920274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.920422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.920459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.920631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.920668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.920820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.920854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.921009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.921043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.921238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.921286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.921472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.921509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.921636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.921673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.921818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.921855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.922030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.922079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 
00:37:31.722 [2024-07-14 15:10:10.922231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.922280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.922441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.922481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.922654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.922727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.922891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.922925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.923098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.923132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.923302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.923362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.923507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.923559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.923687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.923724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.923873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.923963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.924086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.924122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 
00:37:31.722 [2024-07-14 15:10:10.924319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.924358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.924507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.924544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.924717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.924755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.924882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.924941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.925085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.925121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.925280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.925332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.925505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.925542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.925768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.925827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.926001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.926036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.926191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.926229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 
00:37:31.722 [2024-07-14 15:10:10.926377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.926416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.926635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.926673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.926807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.926841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.926970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.927003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.927165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.722 [2024-07-14 15:10:10.927199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.722 qpair failed and we were unable to recover it. 00:37:31.722 [2024-07-14 15:10:10.927344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.927381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.927532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.927569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.927808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.927845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.928041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.928075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.928253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.928290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 
00:37:31.723 [2024-07-14 15:10:10.928888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.928945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.929088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.929123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.929280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.929318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.929507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.929544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.929696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.929734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.929894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.929946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.930087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.930136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.930317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.930374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.930630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.930689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.930819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.930853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 
00:37:31.723 [2024-07-14 15:10:10.931009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.931044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.931220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.931273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.931460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.931518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.931647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.931684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.931810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.931843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.932027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.932193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.932395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.932573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.932756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 
00:37:31.723 [2024-07-14 15:10:10.932929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.932964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.933099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.933132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.933285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.933319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.933549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.933590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.933768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.933805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.933980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.934014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.934177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.934210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.934338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.934396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.934545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.934581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.934709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.934760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 
00:37:31.723 [2024-07-14 15:10:10.934962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.935011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.935165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.935221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.935431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.935469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.935623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.935660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.935798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.935832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.723 [2024-07-14 15:10:10.935991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.723 [2024-07-14 15:10:10.936030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.723 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.936141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.936187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.936327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.936361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.936502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.936553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.936699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.936736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 
00:37:31.724 [2024-07-14 15:10:10.936881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.936915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.937078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.937111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.937301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.937338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.937500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.937533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.937753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.937790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.937961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.938010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.938173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.938213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.938396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.938435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.938566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.938604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.938738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.938776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 
00:37:31.724 [2024-07-14 15:10:10.938953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.939001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.939145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.939191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.939338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.939390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.939585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.939640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.939774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.939808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.939988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.940137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.940347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.940521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.940697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 
00:37:31.724 [2024-07-14 15:10:10.940873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.940933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.941083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.941119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.941294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.941345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.941493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.941570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.941756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.941793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.941925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.941960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.942090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.942138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.942333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.942373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.942542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.942579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.942726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.942763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 
00:37:31.724 [2024-07-14 15:10:10.942903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.942955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.943082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.943131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.943305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.943360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.943531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.943584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.943714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.943747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.943905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.724 [2024-07-14 15:10:10.943939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.724 qpair failed and we were unable to recover it. 00:37:31.724 [2024-07-14 15:10:10.944086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.944138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.944321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.944374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.944525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.944577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.944692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.944725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 
00:37:31.725 [2024-07-14 15:10:10.944839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.944889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.945046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.945093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.945241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.945277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.945467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.945502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.945662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.945706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.945845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.945894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.946048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.946085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.946247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.946300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.946451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.946490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.946674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.946771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 
00:37:31.725 [2024-07-14 15:10:10.946951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.946987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.947100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.947134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.947328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.947365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.947536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.947573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.947729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.947767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.947950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.947999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.948169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.948209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.948329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.948366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.948541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.948578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.948757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.948804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 
00:37:31.725 [2024-07-14 15:10:10.948965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.949137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.949338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.949541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.949694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.949900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.949934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.950066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.950099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.950228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.950265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.950408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.950472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.950616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.950653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 
00:37:31.725 [2024-07-14 15:10:10.950825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.950869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.725 qpair failed and we were unable to recover it. 00:37:31.725 [2024-07-14 15:10:10.951028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.725 [2024-07-14 15:10:10.951076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.951234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.951273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.951419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.951456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.951624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.951662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.951801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.951837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.952017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.952052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.952215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.952249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.952405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.952442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.952563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.952600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 
00:37:31.726 [2024-07-14 15:10:10.952797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.952846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.953946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.953981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.954098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.954133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.954293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.954327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.954459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.954511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 
00:37:31.726 [2024-07-14 15:10:10.954692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.954729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.954904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.954938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.955100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.955137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.955292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.955329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.955438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.955476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.955662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.955718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.955843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.955909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.956039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.956091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.956237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.956289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.956448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.956500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 
00:37:31.726 [2024-07-14 15:10:10.956659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.956693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.956852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.956903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.957077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.957312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.957468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.957641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.957809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.957976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.958011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.958135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.958197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.958361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.958412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 
00:37:31.726 [2024-07-14 15:10:10.958611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.958663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.958800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.726 [2024-07-14 15:10:10.958834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.726 qpair failed and we were unable to recover it. 00:37:31.726 [2024-07-14 15:10:10.958983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.959168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.959325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.959507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.959703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.959901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.959954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.960092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.960147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.960329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.960380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 
00:37:31.727 [2024-07-14 15:10:10.960563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.960627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.960748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.960782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.960934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.960982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.961134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.961181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.961348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.961384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.961517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.961550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.961687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.961720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.961897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.961931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.962037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.962072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.962234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.962284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 
00:37:31.727 [2024-07-14 15:10:10.962436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.962493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.962655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.962689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.962827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.962867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.963903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.963937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.964046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 
00:37:31.727 [2024-07-14 15:10:10.964229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.964414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.964602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.964763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.964934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.964969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.965094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.965147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.965330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.965404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.965528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.965565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.965731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.965765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.965917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.965975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 
00:37:31.727 [2024-07-14 15:10:10.966125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.966162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.966337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.966376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.727 [2024-07-14 15:10:10.966555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.727 [2024-07-14 15:10:10.966613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.727 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.966789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.966826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.967033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.967262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.967449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.967835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.967998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.968034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 
00:37:31.728 [2024-07-14 15:10:10.968169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.968221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.968402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.968453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.968630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.968682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.968821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.968991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.969165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.969347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.969565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.969761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.969930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.969965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 
00:37:31.728 [2024-07-14 15:10:10.970096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.970133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.970270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.970303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.970407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.970625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.970663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.970804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.970841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.971019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.971188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.971331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.971528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.971774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 
00:37:31.728 [2024-07-14 15:10:10.971964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.971998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.972126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.972202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.972413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.972454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.972577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.972616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.972806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.972844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.973012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.973061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.973191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.973227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.973507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.973545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.973721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.973758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.973938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.973972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 
00:37:31.728 [2024-07-14 15:10:10.974079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.974112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.974257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.974308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.974485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.974530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.728 [2024-07-14 15:10:10.974772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.728 [2024-07-14 15:10:10.974808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.728 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.974978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.975012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.975210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.975247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.975420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.975493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.975674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.975711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.975832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.975886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.976085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.976134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 
00:37:31.729 [2024-07-14 15:10:10.976278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.976334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.976519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.976572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.976700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.976738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.976922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.976956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.977105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.977139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.977289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.977324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.977491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.977525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.977639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.977673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.977813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.977847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.978020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.978068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 
00:37:31.729 [2024-07-14 15:10:10.978206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.978252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.978459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.978518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.978774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.978830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.978994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.979028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.979189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.979227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.979378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.979415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.979572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.979609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.979781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.979828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.979985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.980022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.980180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.980233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 
00:37:31.729 [2024-07-14 15:10:10.980405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.980439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.980680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.980751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.980894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.980951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.981128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.981179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.981325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.981423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.981562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.981600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.981777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.981814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.981955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.982129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.982332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 
00:37:31.729 [2024-07-14 15:10:10.982540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.982690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.982922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.982957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.983097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.983131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.729 [2024-07-14 15:10:10.983280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.729 [2024-07-14 15:10:10.983316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.729 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.983490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.983527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.983662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.983713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.983830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.983867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.984008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.984042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.984176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.984210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 
00:37:31.730 [2024-07-14 15:10:10.984389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.984426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.984583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.984620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.984820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.984857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.985029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.985061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.985187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.985224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.985423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.985458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.985631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.985668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.985810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.985847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.986037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.986071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.986207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.986241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 
00:37:31.730 [2024-07-14 15:10:10.986370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.986411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.986622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.986659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.986810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.986847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.987027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.987075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.987204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.987252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.987446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.987500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.987722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.987781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.987955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.987989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.988152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.988206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.988356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.988407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 
00:37:31.730 [2024-07-14 15:10:10.988563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.988620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.988767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.988802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.988974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.989014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.989210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.989264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.989435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.989475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.989632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.989671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.989812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.989849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.990015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.990049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.990183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.990221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 00:37:31.730 [2024-07-14 15:10:10.990354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.730 [2024-07-14 15:10:10.990388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.730 qpair failed and we were unable to recover it. 
00:37:31.731 [2024-07-14 15:10:10.990555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.990592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.990767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.990804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.990947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.991015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.991162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.991197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.991397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.991436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.991617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.991656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.991806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.991843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.992030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.992241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.992434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 
00:37:31.731 [2024-07-14 15:10:10.992609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.992775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.992943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.992977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.993109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.993176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.993324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.993382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.993568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.993607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.993755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.993792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.993955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.994152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.994358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 
00:37:31.731 [2024-07-14 15:10:10.994542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.994760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.994942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.994976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.995121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.995175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.995325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.995363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.995589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.995672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.995850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.995896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.996057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.996090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.996277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.996326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.996554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.996610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 
00:37:31.731 [2024-07-14 15:10:10.996749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.996784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.996895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.996930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.997094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.997149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.997341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.997393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.997604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.997663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.997842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.997886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.998058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.998091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.998263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.998329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.998466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.998516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 00:37:31.731 [2024-07-14 15:10:10.998676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.998713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.731 qpair failed and we were unable to recover it. 
00:37:31.731 [2024-07-14 15:10:10.998839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.731 [2024-07-14 15:10:10.998873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:10.999054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:10.999088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:10.999246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:10.999288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:10.999547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:10.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:10.999735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:10.999784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:10.999979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.000131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.000284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.000483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.000648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 
00:37:31.732 [2024-07-14 15:10:11.000869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.000931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.001043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.001077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.001249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.001283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.001443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-07-14 15:10:11.001480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:31.732 qpair failed and we were unable to recover it. 00:37:31.732 [2024-07-14 15:10:11.001615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.001658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.001784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.001819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.001967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.002001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.002137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.002205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.002395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.002432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.002578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.002615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 
00:37:32.012 [2024-07-14 15:10:11.002799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.002842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.003942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.003977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.004096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.004130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.012 [2024-07-14 15:10:11.004271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.012 [2024-07-14 15:10:11.004323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.012 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.004463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.004500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 
00:37:32.013 [2024-07-14 15:10:11.004685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.004722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.004882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.004934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.005062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.005095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.005204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.005237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.005376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.005437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.005566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.005604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.005810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.005847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.006037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.006071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.006197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.006234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.006375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.006428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 
00:37:32.013 [2024-07-14 15:10:11.006578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.006615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.006732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.006769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.006970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.007170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.007323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.007560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.007749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.007895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.007930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.008066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.008272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 
00:37:32.013 [2024-07-14 15:10:11.008446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.008589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.008760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.008953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.008988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.009131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.009166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.009349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.009401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.009583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.009635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.009772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.009806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.009976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.010010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.010196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.010253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 
00:37:32.013 [2024-07-14 15:10:11.010455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.010515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.010677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.010711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.010862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.010922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.011083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.011122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.011260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.011310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.011457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.011493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.011700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.011758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.011903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.011937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.013 [2024-07-14 15:10:11.012096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.013 [2024-07-14 15:10:11.012130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.013 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.012255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.012292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 
00:37:32.014 [2024-07-14 15:10:11.012444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.012481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.012605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.012642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.012798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.012844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.012967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.013164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.013379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.013578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.013736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.013938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.013977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.014126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.014164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 
00:37:32.014 [2024-07-14 15:10:11.014314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.014351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.014585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.014645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.014766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.014815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.014953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.014987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.015101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.015135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.015244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.015283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.015468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.015524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.015680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.015717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.015887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.015939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.016072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.016106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 
00:37:32.014 [2024-07-14 15:10:11.016266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.016303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.016490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.016527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.016684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.016722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.016874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.016929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.017112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.017296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.017527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.017672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.017883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.017994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 
00:37:32.014 [2024-07-14 15:10:11.018169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.018344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.018542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.018693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.018862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.018902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.019069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.019104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.019243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.019276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.019408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.019442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.019563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.014 [2024-07-14 15:10:11.019599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.014 qpair failed and we were unable to recover it. 00:37:32.014 [2024-07-14 15:10:11.019733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.019768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 
00:37:32.015 [2024-07-14 15:10:11.019931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.019966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.020931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.020965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.021101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.021145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.021273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.021306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.021470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.021503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 
00:37:32.015 [2024-07-14 15:10:11.021634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.021688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.021833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.021868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.022037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.022091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.022244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.022294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.022450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.022501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.022662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.022696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.022825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.022860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.023038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.023206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.023355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 
00:37:32.015 [2024-07-14 15:10:11.023554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.023712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.023869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.023910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.024080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.024277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.024474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.024658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.024830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.024975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.025142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 
00:37:32.015 [2024-07-14 15:10:11.025305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.025466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.025649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.025810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.025847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.025992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.026126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.026347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.026596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.026756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 00:37:32.015 [2024-07-14 15:10:11.026957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.015 [2024-07-14 15:10:11.026991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.015 qpair failed and we were unable to recover it. 
00:37:32.015 [2024-07-14 15:10:11.027125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.027159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.027271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.027322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.027470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.027507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.027721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.027758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.027917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.027967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.028107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.028140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.028337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.028370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.028502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.028539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.028666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.028703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.028847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.028887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 
00:37:32.016 [2024-07-14 15:10:11.029024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.029184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.029351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.029562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.029755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.029933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.029968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.030096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.030130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.030341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.030378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.030507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.030560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.030715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.030752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 
00:37:32.016 [2024-07-14 15:10:11.030885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.030936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.031094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.031279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.031473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.031644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.031839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.031975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.032116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.032298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.032498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 
00:37:32.016 [2024-07-14 15:10:11.032674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.032864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.032924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.033036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.016 [2024-07-14 15:10:11.033069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.016 qpair failed and we were unable to recover it. 00:37:32.016 [2024-07-14 15:10:11.033221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.033270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.033438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.033492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.033683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.033737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.033880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.033916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.034039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.034073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.034254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.034307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.034477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.034516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 
00:37:32.017 [2024-07-14 15:10:11.034643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.034680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.034803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.034840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.035849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.035896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.036062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.036098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.036225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.036264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 
00:37:32.017 [2024-07-14 15:10:11.036447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.036499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.036635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.036670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.036803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.036838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.036979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.037169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.037351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.037536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.037673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.037824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.037869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.038054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 
00:37:32.017 [2024-07-14 15:10:11.038207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.038403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.038539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.038753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.038928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.038971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.039106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.039139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.039270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.039309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.039450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.039487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.039616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.039654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.039814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.039850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 
00:37:32.017 [2024-07-14 15:10:11.040033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.040086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.040264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.040320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.040447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.040489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.040701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.017 [2024-07-14 15:10:11.040739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.017 qpair failed and we were unable to recover it. 00:37:32.017 [2024-07-14 15:10:11.040891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.040955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 00:37:32.018 [2024-07-14 15:10:11.041078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.041123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 00:37:32.018 [2024-07-14 15:10:11.041320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.041353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 00:37:32.018 [2024-07-14 15:10:11.041566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.041623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 00:37:32.018 [2024-07-14 15:10:11.041813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.041851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 00:37:32.018 [2024-07-14 15:10:11.042015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.018 [2024-07-14 15:10:11.042048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.018 qpair failed and we were unable to recover it. 
00:37:32.018 [2024-07-14 15:10:11.042213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.018 [2024-07-14 15:10:11.042250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:32.018 qpair failed and we were unable to recover it.
00:37:32.018 [2024-07-14 15:10:11.044018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.018 [2024-07-14 15:10:11.044068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:32.018 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeats for every connection attempt from 15:10:11.042 through 15:10:11.080, cycling over tqpair=0x6150001f2a00, 0x61500021ff00, 0x615000210000 and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
00:37:32.023 [2024-07-14 15:10:11.080642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.080679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.080838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.080871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.081925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.081981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.082088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.082122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.082255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.082310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 
00:37:32.023 [2024-07-14 15:10:11.082462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.082500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.082671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.082708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.082839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.082873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.083018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.083052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.083207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.083245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.083366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.023 [2024-07-14 15:10:11.083417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.023 qpair failed and we were unable to recover it. 00:37:32.023 [2024-07-14 15:10:11.083568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.083604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.083717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.083754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.083922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.083972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.084123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.084160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 
00:37:32.024 [2024-07-14 15:10:11.084321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.084372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.084513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.084565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.084746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.084797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.084952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.084993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.085146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.085198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.085351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.085388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.085504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.085541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.085659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.085695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.085850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.085901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.086052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.086086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 
00:37:32.024 [2024-07-14 15:10:11.086211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.086247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.086391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.086428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.086603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.086640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.086786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.086821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.086996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.087031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.087158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.087210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.087407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.087459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.087612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.087664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.087797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.087831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.087974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 
00:37:32.024 [2024-07-14 15:10:11.088146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.088314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.088480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.088658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.088800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.088947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.088982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.089147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.089199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.089379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.089438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.089565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.089618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.089730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.089764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 
00:37:32.024 [2024-07-14 15:10:11.089954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.090007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.090173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.090209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.090351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.090385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.090487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.090521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.024 [2024-07-14 15:10:11.090656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.024 [2024-07-14 15:10:11.090689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.024 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.090824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.090858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.091002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.091036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.091169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.091234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.091358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.091396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.091564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.091615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 
00:37:32.025 [2024-07-14 15:10:11.091762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.091796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.091975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.092155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.092368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.092529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.092711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.092859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.092904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.093020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.093205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.093397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 
00:37:32.025 [2024-07-14 15:10:11.093586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.093755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.093944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.093978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.094081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.094114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.094262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.094299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.094472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.094508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.094623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.094660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.094825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.094863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.095011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.095045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.095200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.095236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 
00:37:32.025 [2024-07-14 15:10:11.095379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.095448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.095606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.095643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.095790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.095828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.095998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.096032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.096176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.025 [2024-07-14 15:10:11.096243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.025 qpair failed and we were unable to recover it. 00:37:32.025 [2024-07-14 15:10:11.096413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.096600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.096653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.096787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.096827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.096937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.096972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.097114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.097149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 
00:37:32.026 [2024-07-14 15:10:11.097299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.097334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.097492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.097526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.097663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.097697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.097817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.097852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.097996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.098138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.098306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.098439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.098582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.098729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 
00:37:32.026 [2024-07-14 15:10:11.098893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.098928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.099078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.099130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.099258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.099311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.099472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.099523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.099666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.099701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.099818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.099861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.100051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.100089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.100249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.100286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.100415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.100452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.100601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.100638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 
00:37:32.026 [2024-07-14 15:10:11.100782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.100819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.100991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.101025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.101156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.101193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.101383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.101421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.101576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.101613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.101815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.101851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.102021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.102055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.026 qpair failed and we were unable to recover it. 00:37:32.026 [2024-07-14 15:10:11.102237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.026 [2024-07-14 15:10:11.102275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.102445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.102482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.102635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.102672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 
00:37:32.027 [2024-07-14 15:10:11.102822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.102868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.103038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.103087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.103234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.103288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.103445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.103499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.103607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.103642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.103802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.103837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.104009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.104193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.104344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.104522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 
00:37:32.027 [2024-07-14 15:10:11.104666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.104847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.104897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.105044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.105231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.105431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.105617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.105790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.105960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.106125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.106300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 
00:37:32.027 [2024-07-14 15:10:11.106455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.106662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.106809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.106844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.106967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.107146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.107327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.107517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.107685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.107890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.107925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.108078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.108130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 
00:37:32.027 [2024-07-14 15:10:11.108300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.108353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.108509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.108560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.108699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.108733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.108895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.108929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.027 qpair failed and we were unable to recover it. 00:37:32.027 [2024-07-14 15:10:11.109090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.027 [2024-07-14 15:10:11.109129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.109302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.109340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.109479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.109516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.109637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.109674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.109869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.109912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.110089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.110123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 
00:37:32.028 [2024-07-14 15:10:11.110276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.110327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.110471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.110508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.110636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.110673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.110825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.110865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.111871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.111940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 
00:37:32.028 [2024-07-14 15:10:11.112085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.112119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.112280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.112318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.112450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.112501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.112630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.112667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.112793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.112831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.112976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.113010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.113174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.113211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.113366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.113403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.113634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.113671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.113832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.113888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 
00:37:32.028 [2024-07-14 15:10:11.114015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.114059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.114177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.114210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.114352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.114404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.114552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.114589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.114797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.114834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.115005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.115056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.115234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.115285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.115460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.115505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.115757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.115795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.115943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.115977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 
00:37:32.028 [2024-07-14 15:10:11.116092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.116125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.028 [2024-07-14 15:10:11.116245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.028 [2024-07-14 15:10:11.116294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.028 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.116415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.116452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.116668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.116705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.116891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.116947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.117090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.117128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.117328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.117362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.117512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.117582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.117738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.117776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.117919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.117956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 
00:37:32.029 [2024-07-14 15:10:11.118093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.118131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.118279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.118327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.118463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.118519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.118647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.118689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.118844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.118905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.119047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.119081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.119217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.119252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.119448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.119506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.119719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.119756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.119896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.119952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 
00:37:32.029 [2024-07-14 15:10:11.120065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.120104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.120261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.120299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.120476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.120516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.120655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.120693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.120895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.120945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.121083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.121131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.121295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.121335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.121499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.121558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.121694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.121746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.121874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.121933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 
00:37:32.029 [2024-07-14 15:10:11.122042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.122219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.122417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.122580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.122742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.122957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.122991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.123134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.123185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.123343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.123376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.123530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.123567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.029 qpair failed and we were unable to recover it. 00:37:32.029 [2024-07-14 15:10:11.123709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.029 [2024-07-14 15:10:11.123746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 
00:37:32.030 [2024-07-14 15:10:11.123893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.123927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.124065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.124250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.124453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.124648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.124811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.124966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.125166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.125358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.125540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 
00:37:32.030 [2024-07-14 15:10:11.125713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.125886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.125920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.126921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.126962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.127070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.127105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.127265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.127330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 
00:37:32.030 [2024-07-14 15:10:11.127516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.127571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.127696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.127734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.127899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.127934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.128049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.128083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.128276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.128348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.128548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.128586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.128761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.128798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.128929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.128975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.129110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.129143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.129271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.129323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 
00:37:32.030 [2024-07-14 15:10:11.129448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.129484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.129607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.129657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.129806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.129843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.129979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.130013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.130150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.130183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.130333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.130370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.130527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.030 [2024-07-14 15:10:11.130564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.030 qpair failed and we were unable to recover it. 00:37:32.030 [2024-07-14 15:10:11.130768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.130806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.130952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.130986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.131117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.131151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 
00:37:32.031 [2024-07-14 15:10:11.131302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.131354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.131499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.131535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.131684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.131721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.131854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.131896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.132047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.132095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.132255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.132309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.132490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.132530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.132694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.132732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.132874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.132920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.133078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.133112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 
00:37:32.031 [2024-07-14 15:10:11.133293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.133331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.133484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.133526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.133707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.133745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.133911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.133963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.134115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.134168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.134357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.134393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.134566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.134605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.134756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.134799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.134963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.134998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.135107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.135140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 
00:37:32.031 [2024-07-14 15:10:11.135350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.135383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.135618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.135689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.135808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.135867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.135993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.136027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.136159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.136192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.136325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.136358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.136463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.031 [2024-07-14 15:10:11.136514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.031 qpair failed and we were unable to recover it. 00:37:32.031 [2024-07-14 15:10:11.136653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.136689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.136857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.136934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.137056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 
00:37:32.032 [2024-07-14 15:10:11.137286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.137456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.137607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.137782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.137947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.137980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.138110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.138143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.138261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.138297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.138468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.138504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.138611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.138647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.138810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.138858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 
00:37:32.032 [2024-07-14 15:10:11.139010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:32.032 [2024-07-14 15:10:11.139172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.139241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.139429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.139527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.139733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.139790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.139958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.139993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.140120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.140172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.140407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.140464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.140674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.140735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.140868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.140917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.141063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.141097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 
00:37:32.032 [2024-07-14 15:10:11.141232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.141265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.141389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.141426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.141609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.141647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.141795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.141832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.141967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.142135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.142385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.142538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.142731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.142922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.142956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 
00:37:32.032 [2024-07-14 15:10:11.143110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.143143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.143282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.143319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.143461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.143513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.143636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.143672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.143853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.032 [2024-07-14 15:10:11.143897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.032 qpair failed and we were unable to recover it. 00:37:32.032 [2024-07-14 15:10:11.144060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.144108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.144250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.144307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.144492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.144545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.144750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.144809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.144955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.144990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 
00:37:32.033 [2024-07-14 15:10:11.145150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.145200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.145397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.145471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.145701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.145767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.145894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.145944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.146101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.146138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.146284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.146360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.146601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.146660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.146828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.146864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.147012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.147047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.147229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.147284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 
00:37:32.033 [2024-07-14 15:10:11.147493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.147546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.147681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.147714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.147821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.147855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.148065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.148245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.148405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.148634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.148843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.148985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.149019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.149173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.149228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 
00:37:32.033 [2024-07-14 15:10:11.149483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.149551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.149800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.149853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.150001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.150035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.150165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.150203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.150360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.150417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.150581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.150642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.150820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.150857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.151049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.151098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.151298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.151339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.151578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.151635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 
00:37:32.033 [2024-07-14 15:10:11.151839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.151886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.152017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.152062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.152199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.033 [2024-07-14 15:10:11.152252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.033 qpair failed and we were unable to recover it. 00:37:32.033 [2024-07-14 15:10:11.152375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.152413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.152633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.152683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.152920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.152969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.153108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.153173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.153433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.153483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.153667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.153726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.153871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.153915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 
00:37:32.034 [2024-07-14 15:10:11.154060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.154109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.154301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.154359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.154516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.154613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.154723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.154757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.154894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.154928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.155074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.155269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.155442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.155603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.155758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 
00:37:32.034 [2024-07-14 15:10:11.155934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.155969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.156115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.156276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.156469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.156631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.156807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.156980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.157030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.157205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.157258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.157475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.157535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.157742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.157798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 
00:37:32.034 [2024-07-14 15:10:11.157933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.158121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.158155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.158428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.158487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.158712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.158754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.158916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.158950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.159069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.159102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.159210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.159260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.159417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.159455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.159635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.159673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 00:37:32.034 [2024-07-14 15:10:11.159827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.159864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.034 qpair failed and we were unable to recover it. 
00:37:32.034 [2024-07-14 15:10:11.160019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.034 [2024-07-14 15:10:11.160067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.160231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.160280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.160443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.160497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.160618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.160656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.160836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.160870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.161036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.161085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.161245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.161283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.161501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.161560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.161740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.161778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.161894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.161945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 
00:37:32.035 [2024-07-14 15:10:11.162125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.162173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.162447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.162519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.162795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.162852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.163007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.163042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.163233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.163286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.163506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.163602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.163739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.163773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.163911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.163945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.164096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.164150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.164386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.164446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 
00:37:32.035 [2024-07-14 15:10:11.164729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.164794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.164972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.165008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.165180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.165233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.165432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.165473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.165693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.165762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.165968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.166114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.166368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.166534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.166731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 
00:37:32.035 [2024-07-14 15:10:11.166926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.035 [2024-07-14 15:10:11.166960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.035 qpair failed and we were unable to recover it. 00:37:32.035 [2024-07-14 15:10:11.167079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.167116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.167286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.167324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.167525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.167563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.167681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.167719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.167895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.167952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.168068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.168102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.168324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.168382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.168573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.168611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.168823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.168861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 
00:37:32.036 [2024-07-14 15:10:11.169034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.169069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.169260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.169309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.169515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.169584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.169728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.169766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.169936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.169971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.170074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.170119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.170318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.170378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.170576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.170609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.170777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.170814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.170981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.171015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 
00:37:32.036 [2024-07-14 15:10:11.171198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.171251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.171519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.171586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.171707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.171744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.171923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.171974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.172098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.172146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.172327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.172384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.172641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.172680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.172807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.172845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.173014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.173048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.173179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.173213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 
00:37:32.036 [2024-07-14 15:10:11.173394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.173463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.173670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.173727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.173888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.173922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.174100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.174149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.174329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.174369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.174550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.174588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.174769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.174807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.174965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.174999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.175151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.036 [2024-07-14 15:10:11.175199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.036 qpair failed and we were unable to recover it. 00:37:32.036 [2024-07-14 15:10:11.175366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.175417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 
00:37:32.037 [2024-07-14 15:10:11.175604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.175642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.175800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.175837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.176003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.176037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.176164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.176197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.176404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.176472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.176660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.176727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.176931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.176965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.177101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.177134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.177396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.177433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.177585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.177622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 
00:37:32.037 [2024-07-14 15:10:11.177794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.177831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.177970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.178180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.178360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.178592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.178741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.178944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.178979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.179096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.179130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.179273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.179306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.179447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.179481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 
00:37:32.037 [2024-07-14 15:10:11.179631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.179668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.179780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.179823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.179987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.180152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.180342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.180519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.180732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.180939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.180973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.181133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.181167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.181308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.181342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 
00:37:32.037 [2024-07-14 15:10:11.181511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.181548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.181664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.181702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.181884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.181919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.182051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.182100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.182313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.182367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.182525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.037 [2024-07-14 15:10:11.182564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.037 qpair failed and we were unable to recover it. 00:37:32.037 [2024-07-14 15:10:11.182746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.182784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.182953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.182988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.183119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.183152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.183364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.183437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 
00:37:32.038 [2024-07-14 15:10:11.183600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.183655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.183775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.183808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.183972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.184006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.184202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.184255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.184437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.184474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.184680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.184718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.184905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.184957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.185062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.185095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.185244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.185299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.185434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.185474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 
00:37:32.038 [2024-07-14 15:10:11.185635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.185669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.185800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.185834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.186085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.186294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.186462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.186677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.186844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.186994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.187131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.187343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 
00:37:32.038 [2024-07-14 15:10:11.187506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.187658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.187818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.187854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.188034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.188202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.188432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.188619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.188812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.188965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.189000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.189128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.189194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 
00:37:32.038 [2024-07-14 15:10:11.189470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.189529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.189677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.189711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.189821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.189855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.038 qpair failed and we were unable to recover it. 00:37:32.038 [2024-07-14 15:10:11.190005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.038 [2024-07-14 15:10:11.190038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.190173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.190207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.190372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.190409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.190556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.190593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.190755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.190788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.190950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.190984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.191122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.191172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 
00:37:32.039 [2024-07-14 15:10:11.191299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.191333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.191496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.191549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.191664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.191700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.191842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.191882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.192919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.192954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 
00:37:32.039 [2024-07-14 15:10:11.193086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.193120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.193292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.193331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.193496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.193529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.193689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.193740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.193933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.193968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.194139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.194174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.194326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.194362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.194509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.194546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.194700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.194733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.194874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.194915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 
00:37:32.039 [2024-07-14 15:10:11.195051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.195084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.195211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.195249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.195407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.195446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.195600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.195637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.195788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.195822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.195999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.196169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.196344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.196534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.196693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 
00:37:32.039 [2024-07-14 15:10:11.196869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.039 [2024-07-14 15:10:11.196908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.039 qpair failed and we were unable to recover it. 00:37:32.039 [2024-07-14 15:10:11.197050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.197086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.197287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.197340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.197483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.197520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.197660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.197712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.197906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.197959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.198120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.198154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.198359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.198397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.198658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.198715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.198874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.198917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 
00:37:32.040 [2024-07-14 15:10:11.199059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.199094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.199276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.199357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.199507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.199543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.199725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.199765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.199914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.199965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.200073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.200107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.200300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.200378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.200613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.200668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.200811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.200844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.200974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.201010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 
00:37:32.040 [2024-07-14 15:10:11.201143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.201196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.201334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.201368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.201497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.201530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.201766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.201840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.202022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.202059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.202183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.202231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.202450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.202489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.202679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.202713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.202854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.202897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.203034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.203082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 
00:37:32.040 [2024-07-14 15:10:11.203235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.203271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.040 qpair failed and we were unable to recover it. 00:37:32.040 [2024-07-14 15:10:11.203525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.040 [2024-07-14 15:10:11.203594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.203780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.203817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.203985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.204020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.204175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.204211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.204418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.204486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.204653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.204688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.204826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.204860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.204985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.205151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 
00:37:32.041 [2024-07-14 15:10:11.205329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.205505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.205740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.205946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.205982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.206116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.206165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.206345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.206381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.206612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.206650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.206803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.206840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.206976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.207117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 
00:37:32.041 [2024-07-14 15:10:11.207317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.207511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.207652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.207871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.207917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.208096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.208129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.208357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.208412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.208608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.208667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.208824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.208858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.209016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.209065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.209260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.209314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 
00:37:32.041 [2024-07-14 15:10:11.209482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.209519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.209736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.209796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.209968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.210123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.210316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.210564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.210759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.210932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.210966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.211075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.211109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 00:37:32.041 [2024-07-14 15:10:11.211241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.041 [2024-07-14 15:10:11.211274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.041 qpair failed and we were unable to recover it. 
00:37:32.042 [2024-07-14 15:10:11.211385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.211437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.211586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.211628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.211757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.211790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.211929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.211962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.212124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.212157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.212362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.212395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.212495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.212546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.212701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.212737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.212885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.212918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.213083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.213116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 
00:37:32.042 [2024-07-14 15:10:11.213226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.213263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.213412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.213445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.213572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.213621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.213766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.213803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.213970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.214003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.214160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.214226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.214401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.214442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.214605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.214640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.214752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.214802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.214978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 
00:37:32.042 [2024-07-14 15:10:11.215170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.215342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.215484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.215652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.215843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.215888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.216026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.216074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.216258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.216293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.216425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.216613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.216656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.216801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.216835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 
00:37:32.042 [2024-07-14 15:10:11.216980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.217123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.217300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.217487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.217704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.217870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.217914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.218031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.218065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.218225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.042 [2024-07-14 15:10:11.218263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.042 qpair failed and we were unable to recover it. 00:37:32.042 [2024-07-14 15:10:11.218428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.218461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.218628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.218680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 
00:37:32.043 [2024-07-14 15:10:11.218830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.218867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.219047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.219231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.219433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.219637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.219836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.219999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.220167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.220309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.220502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 
00:37:32.043 [2024-07-14 15:10:11.220671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.220841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.220897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.221936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.221970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.222111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.222144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.222301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.222334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 
00:37:32.043 [2024-07-14 15:10:11.222462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.222495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.222647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.222683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.222863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.222902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.223029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.223076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.223210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.223258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.223448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.223485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.223666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.223704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.223849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.223892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.224044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.224083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.224253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.224322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 
00:37:32.043 [2024-07-14 15:10:11.224458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.224530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.224679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.224713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.224855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.224914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.225097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.225145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.225305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.225342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.043 [2024-07-14 15:10:11.225531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.043 [2024-07-14 15:10:11.225603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.043 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.225728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.225765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.225929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.225963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.226109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.226143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.226282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.226320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 
00:37:32.044 [2024-07-14 15:10:11.226500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.226534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.226644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.226695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.226858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.226907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.227059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.227092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.227244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.227281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.227453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.227504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.227672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.227706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.227815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.227868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.228012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.228182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 
00:37:32.044 [2024-07-14 15:10:11.228374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.228543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.228745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.228888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.228922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.229903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.229937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 
00:37:32.044 [2024-07-14 15:10:11.230053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.230086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.230214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.230263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.230411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.230447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.230602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.230699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.230859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.230903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.231044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.231077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.231183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.231216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.231462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.231524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.231803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.231884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.232041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.232076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 
00:37:32.044 [2024-07-14 15:10:11.232245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.232279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.232463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.232530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.232735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.232772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.044 [2024-07-14 15:10:11.232975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.044 [2024-07-14 15:10:11.233008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.044 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.233124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.233158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.233321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.233354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.233554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.233591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.233759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.233792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.233943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.233992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.234115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.234150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 
00:37:32.045 [2024-07-14 15:10:11.234308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.234345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.234566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.234626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.234778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.234830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.234976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.235010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.235144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.235196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.235380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.235413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.235576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.235642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.235792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.235831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.235977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.236012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.236143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.236196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 
00:37:32.045 [2024-07-14 15:10:11.236430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.236487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.236687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.236724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.236873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.236916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.237077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.237112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.237272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.237305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.237445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.237495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.237719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.237774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.237956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.237990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.238139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.238209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.238406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.238441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 
00:37:32.045 [2024-07-14 15:10:11.238645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.238688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.238832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.238870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.239026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.239059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.239195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.239242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.045 qpair failed and we were unable to recover it. 00:37:32.045 [2024-07-14 15:10:11.239483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.045 [2024-07-14 15:10:11.239536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.239731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.239791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.239936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.239971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.240104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.240152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.240283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.240329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.240535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.240595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 
00:37:32.046 [2024-07-14 15:10:11.240772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.240810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.240946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.240980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.241116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.241180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.241393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.241493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.241717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.241751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.241859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.241901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.242044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.242077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.242282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.242359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.242528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.242598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.242781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.242835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 
00:37:32.046 [2024-07-14 15:10:11.242993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.243030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.243213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.243250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.243388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.243440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.243604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.243657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.243813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.243852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.244018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.244053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.244205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.244242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.244390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.244427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.244630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.244668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.244848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.244892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 
00:37:32.046 [2024-07-14 15:10:11.245030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.245064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.245256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.245534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.245606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.245757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.245797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.245962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.245996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.246119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.246153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.246328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.246365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.246592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.246652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.246833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.246883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.247023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.247057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 
00:37:32.046 [2024-07-14 15:10:11.247214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.247310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.247475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.247525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.046 [2024-07-14 15:10:11.247699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.046 [2024-07-14 15:10:11.247736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.046 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.247909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.247975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.248128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.248165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.248318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.248356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.248549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.248583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.248835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.248873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.249077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.249115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.249296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.249329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 
00:37:32.047 [2024-07-14 15:10:11.249577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.249613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.249791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.249828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.249992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.250026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.250167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.250202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.250375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.250413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.250561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.250598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.250829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.250866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.251062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.251095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.251276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.251313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.251497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.251595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 
00:37:32.047 [2024-07-14 15:10:11.251755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.251793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.251975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.252010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.252171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.252204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.252396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.252429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.252597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.252634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.252778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.252828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.252989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.253024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.253158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.253191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.253377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.253414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.253563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.253600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 
00:37:32.047 [2024-07-14 15:10:11.253772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.253810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.253989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.254038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.254186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.254222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.254370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.254422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.254614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.254665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.254823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.254870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.255060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.255385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.255445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.255673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.255754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.255945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.255979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 
00:37:32.047 [2024-07-14 15:10:11.256191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.047 [2024-07-14 15:10:11.256276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.047 qpair failed and we were unable to recover it. 00:37:32.047 [2024-07-14 15:10:11.256628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.256679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.256862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.256908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.257044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.257079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.257233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.257282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.257442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.257496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.257692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.257746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.257889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.257925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.258060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.258099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.258254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.258310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 
00:37:32.048 [2024-07-14 15:10:11.258493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.258547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.258693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.258728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.258865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.258908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.259069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.259103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.259238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.259272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.259423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.259476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.259616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.259651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.259808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.259856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.260011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.260048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.260214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.260248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 
00:37:32.048 [2024-07-14 15:10:11.260482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.260539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.260750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.260810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.261016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.261051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.261210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.261262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.261370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.261403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.261613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.261674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.261812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.261846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.262002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.262055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.262238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.262292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.262432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.262473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 
00:37:32.048 [2024-07-14 15:10:11.262669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.262728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.262897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.262932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.263115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.263181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.263336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.263375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.263571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.263638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.263796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.263833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.264004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.264044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.264163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.264198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.264351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.048 [2024-07-14 15:10:11.264388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.048 qpair failed and we were unable to recover it. 00:37:32.048 [2024-07-14 15:10:11.264540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.264578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 
00:37:32.049 [2024-07-14 15:10:11.264786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.264823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.264959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.264993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.265104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.265138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.265299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.265336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.265494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.265530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.265684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.265721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.265926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.265975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.266145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.266194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.266392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.266441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.266589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.266628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 
00:37:32.049 [2024-07-14 15:10:11.266832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.266870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.267042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.267076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.267257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.267294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.267488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.267559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.267706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.267743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.267905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.267957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.268096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.268131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.268263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.268314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.268486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.268523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.268679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.268733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 
00:37:32.049 [2024-07-14 15:10:11.268865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.268929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.269951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.269985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.270141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.270210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.270374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.270410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 00:37:32.049 [2024-07-14 15:10:11.270563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.049 [2024-07-14 15:10:11.270600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.049 qpair failed and we were unable to recover it. 
00:37:32.049 [2024-07-14 15:10:11.270740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.270777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.270929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.270963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.271103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.271137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.271331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.271368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.271502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.271554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.271707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.271744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.271930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.271976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.272150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.272199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.272362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.272425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.272610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.272662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 
00:37:32.050 [2024-07-14 15:10:11.272798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.272832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.273016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.273051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.273229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.273284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.273421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.273462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.273618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.273656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.273838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.273872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.274048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.274097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.274278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.274330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.274657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.274721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.274890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.274929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 
00:37:32.050 [2024-07-14 15:10:11.275061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.275095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.275245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.275293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.275464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.275522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.275671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.275767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.275904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.275940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.276084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.276135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.276323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.276360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.276481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.276516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.276631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.276665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.276800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.276834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 
00:37:32.050 [2024-07-14 15:10:11.277021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.277070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.277253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.277307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.277551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.277592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.277719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.277758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.277927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.277963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.278105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.278139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.278294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.278333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.050 qpair failed and we were unable to recover it. 00:37:32.050 [2024-07-14 15:10:11.278599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.050 [2024-07-14 15:10:11.278659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.278824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.278858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.279028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.279076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 
00:37:32.051 [2024-07-14 15:10:11.279301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.279361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.279626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.279689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.279869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.279929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.280043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.280078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.280234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.280287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.280425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.280466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.280716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.280772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.280920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.280956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.281065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.281099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.281230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.281269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 
00:37:32.051 [2024-07-14 15:10:11.281479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.281516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.281793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.281861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.282013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.282046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.282182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.282233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.282443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.282508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.282717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.282813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.282988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.283023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.283177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.283211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.283439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.283481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.283688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.283722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 
00:37:32.051 [2024-07-14 15:10:11.283895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.283947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.284088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.284121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.284267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.284303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.284473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.284559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.284724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.284760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.284887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.284940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.285047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.285081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.285206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.285239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.285393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.285431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.285616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.285650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 
00:37:32.051 [2024-07-14 15:10:11.285821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.285855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.285994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.286043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.286264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.286300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.286500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.051 [2024-07-14 15:10:11.286538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.051 qpair failed and we were unable to recover it. 00:37:32.051 [2024-07-14 15:10:11.286690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.286728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.286851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.286896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.287099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.287147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.287319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.287374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.287513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.287566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.287752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.287810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 
00:37:32.052 [2024-07-14 15:10:11.287971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.288177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.288344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.288520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.288697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.288861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.288917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.289083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.289118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.289281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.289332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.289488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.289521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.289658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.289691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 
00:37:32.052 [2024-07-14 15:10:11.289803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.289837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.290050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.290103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.290267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.290308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.290448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.290498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.290731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.290769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.290962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.290996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.291101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.291135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.291360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.291397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.291585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.291628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.291802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.291839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 
00:37:32.052 [2024-07-14 15:10:11.291966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.291999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.292129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.292176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.292359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.292424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.292622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.292681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.292833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.292886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.293031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.293065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.293257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.293306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.293575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.293634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.293749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.293783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.293936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.293971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 
00:37:32.052 [2024-07-14 15:10:11.294139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.294195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.294355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.052 [2024-07-14 15:10:11.294410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.052 qpair failed and we were unable to recover it. 00:37:32.052 [2024-07-14 15:10:11.294562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.294596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.294756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.294790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.294940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.294975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.295125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.295161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.295293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.295341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.295471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.295508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.295729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.295795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.295959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.295994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 
00:37:32.053 [2024-07-14 15:10:11.296159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.296196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.296451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.296517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.296822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.296860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.297011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.297045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.297220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.297283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.297540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.297597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.297743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.297780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.297935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.297969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.298081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.298124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.298267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.298304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 
00:37:32.053 [2024-07-14 15:10:11.298431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.298483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.298642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.053 [2024-07-14 15:10:11.298679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.053 qpair failed and we were unable to recover it. 00:37:32.053 [2024-07-14 15:10:11.298895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.298962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.299125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.299162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.299371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.299411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.299587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.299626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.299831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.299869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.300039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.300073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.300215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.300260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.300385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.300437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 
00:37:32.338 [2024-07-14 15:10:11.300570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.300647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.300799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.338 [2024-07-14 15:10:11.300837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.338 qpair failed and we were unable to recover it. 00:37:32.338 [2024-07-14 15:10:11.301044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.301093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.301240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.301276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.301463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.301516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.301778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.301844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.302003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.302037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.302184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.302238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.302403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.302482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.302642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.302706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 
00:37:32.339 [2024-07-14 15:10:11.302891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.302926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.303085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.303119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.303296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.303349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.303484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.303538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.303822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.303888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.304028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.304061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.304166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.304200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.304334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.304386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.304567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.304631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.304783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.304820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 
00:37:32.339 [2024-07-14 15:10:11.304989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.305023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.305191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.305232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.305496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.305587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.305762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.305799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.305982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.306017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.306175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.306223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.306431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.306484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.306661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.306728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.306927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.306962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.307090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.307123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 
00:37:32.339 [2024-07-14 15:10:11.307233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.307284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.307408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.339 [2024-07-14 15:10:11.307445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.339 qpair failed and we were unable to recover it. 00:37:32.339 [2024-07-14 15:10:11.307670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.307719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.307895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.307948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.308057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.308091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.308231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.308275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.308521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.308577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.308754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.308791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.308920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.308959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.309063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.309096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 
00:37:32.340 [2024-07-14 15:10:11.309252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.309288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.309549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.309608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.309756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.309792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.309986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.310036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.310209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.310257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.310440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.310500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.310654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.310750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.310922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.310958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.311094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.311146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.311300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.311351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 
00:37:32.340 [2024-07-14 15:10:11.311548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.311606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.311770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.311803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.311987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.312188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.312402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.312561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.312755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.312949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.312984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.313109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.313146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.313319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.313356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 
00:37:32.340 [2024-07-14 15:10:11.313506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.313543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.313751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.313805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.340 qpair failed and we were unable to recover it. 00:37:32.340 [2024-07-14 15:10:11.313955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.340 [2024-07-14 15:10:11.314009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.314149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.314190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.314344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.314384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.314542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.314596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.314700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.314734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.314903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.314938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.315088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.315122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.315294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.315327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 
00:37:32.341 [2024-07-14 15:10:11.315547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.315605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.315778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.315815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.315976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.316010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.316148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.316182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.316309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.316361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.316512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.316549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.316682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.316733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.316960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.317009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.317172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.317218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.317434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.317473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 
00:37:32.341 [2024-07-14 15:10:11.317624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.317663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.317821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.317859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.318034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.318082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.318309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.318363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.318563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.318620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.318747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.318784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.318923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.318957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.319092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.319126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.319340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.319397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.319574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.319630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 
00:37:32.341 [2024-07-14 15:10:11.319805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.319843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.320041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.320091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.341 [2024-07-14 15:10:11.320270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.341 [2024-07-14 15:10:11.320336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.341 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.320581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.320641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.320776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.320811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.320944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.320978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.321076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.321109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.321272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.321311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.321523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.321581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.321731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.321782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 
00:37:32.342 [2024-07-14 15:10:11.321891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.321924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.322038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.322251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.322292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.322458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.322498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.322650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.322688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.322827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.322892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.323104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.323152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.323381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.323439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.323592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.323661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.323830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.323863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 
00:37:32.342 [2024-07-14 15:10:11.324012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.324045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.324194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.324249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.324412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.324453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.324699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.324765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.324957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.324993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.325108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.325142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.325352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.325389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.325571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.325609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.325791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.325844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.326014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.326063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 
00:37:32.342 [2024-07-14 15:10:11.326195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.326243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.326410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.326450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.326665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.326732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.326908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.342 [2024-07-14 15:10:11.326943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.342 qpair failed and we were unable to recover it. 00:37:32.342 [2024-07-14 15:10:11.327051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.327085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.327256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.327308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.327455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.327543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.327727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.327789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.327971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.328138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 
00:37:32.343 [2024-07-14 15:10:11.328297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.328450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.328636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.328828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.328884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.329051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.329086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.329272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.329332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.329582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.329654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.329795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.329832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.329978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.330012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.330143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.330176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 
00:37:32.343 [2024-07-14 15:10:11.330349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.330417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.330614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.330676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.330833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.330866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.331903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.331936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.332069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.332102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 
00:37:32.343 [2024-07-14 15:10:11.332288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.332365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.332516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.332552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.332706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.332744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.332862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.332923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.333059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.343 [2024-07-14 15:10:11.333092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.343 qpair failed and we were unable to recover it. 00:37:32.343 [2024-07-14 15:10:11.333198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.333231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.333337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.333370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.333549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.333586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.333748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.333802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.333991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 
00:37:32.344 [2024-07-14 15:10:11.334163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.334365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.334529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.334692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.334900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.334934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.335043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.335228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.335442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.335600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.335784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 
00:37:32.344 [2024-07-14 15:10:11.335951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.335986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.336152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.336219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.336365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.336406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.336538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.336591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.336770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.336807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.336997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.337031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.337149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.337198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.337371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.344 [2024-07-14 15:10:11.337409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.344 qpair failed and we were unable to recover it. 00:37:32.344 [2024-07-14 15:10:11.337562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.337599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.337739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.337776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 
00:37:32.345 [2024-07-14 15:10:11.337910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.337944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.338079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.338112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.338257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.338293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.338421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.338471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.338645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.338682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.338791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.338836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.339021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.339069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.339251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.339298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.339422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.339474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.339633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.339686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 
00:37:32.345 [2024-07-14 15:10:11.339829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.339864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.340914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.340948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.341083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.341260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 
00:37:32.345 [2024-07-14 15:10:11.341413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.341585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.341742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.341943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.341991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.342135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.342189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.342366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.342403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.342679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.342736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.342890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.342942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.343079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.343112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 00:37:32.345 [2024-07-14 15:10:11.343261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.345 [2024-07-14 15:10:11.343297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.345 qpair failed and we were unable to recover it. 
00:37:32.346 [2024-07-14 15:10:11.343432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.343470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.343599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.343651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.343830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.343867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.344041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.344216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.344482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.344690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.344845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.344991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.345025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.345135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.345185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 
00:37:32.346 [2024-07-14 15:10:11.345390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.345446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.345648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.345707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.345873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.345948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.346073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.346109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.346234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.346272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.346388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.346431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.346616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.346657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.346808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.346857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.347027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.347075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.347254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.347316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 
00:37:32.346 [2024-07-14 15:10:11.347495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.347563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.347769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.347806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.347946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.347981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.348140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.348192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.348368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.348405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.348580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.348617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.348786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.348824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.348990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.349040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.349220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.349256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.346 [2024-07-14 15:10:11.349431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.349484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 
00:37:32.346 [2024-07-14 15:10:11.349641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.346 [2024-07-14 15:10:11.349693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.346 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.349815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.349863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.349995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.350031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.350188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.350228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.350429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.350529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.350650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.350686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.350836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.350873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.351030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.351079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.351226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.351280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.351430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.351468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 
00:37:32.347 [2024-07-14 15:10:11.351657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.351692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.351829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.351889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.352949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.352985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.353134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.353172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 
00:37:32.347 [2024-07-14 15:10:11.353292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.353328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.353459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.353493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.353661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.353698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.353827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.353865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.354897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.354961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 
00:37:32.347 [2024-07-14 15:10:11.355088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.355125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.347 [2024-07-14 15:10:11.355282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.347 [2024-07-14 15:10:11.355327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.347 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.355503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.355543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.355697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.355738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.355916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.355964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.356104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.356138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.356270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.356323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.356546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.356606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.356782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.356818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.356995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 
00:37:32.348 [2024-07-14 15:10:11.357182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.357352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.357552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.357705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.357890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.357943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.358046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.358079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.358214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.358251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.358425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.358462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.358577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.358614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.358764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.358804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 
00:37:32.348 [2024-07-14 15:10:11.358968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.359017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.359154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.359202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.359447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.359511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.359729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.359791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.359914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.359949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.360067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.360102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.360233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.360270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.360467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.360564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.360715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.360752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.360898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.360935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 
00:37:32.348 [2024-07-14 15:10:11.361095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.361129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.361261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.361299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.361447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.361485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.348 qpair failed and we were unable to recover it. 00:37:32.348 [2024-07-14 15:10:11.361679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.348 [2024-07-14 15:10:11.361746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.361892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.361930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.362053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.362093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.362217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.362271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.362452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.362486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.362723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.362785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.362951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.362991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 
00:37:32.349 [2024-07-14 15:10:11.363114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.363151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.363276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.363313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.363468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.363504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.363662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.363698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.363844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.363886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.364008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.364048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.364217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.364255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.364382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.364420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.364599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.364637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.364773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.364810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 
00:37:32.349 [2024-07-14 15:10:11.364995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.365044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.365181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.365229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.365367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.365407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.365675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.365732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.365902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.365937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.366093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.366145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.366306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.366364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.366523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.366598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.366789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.366855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 00:37:32.349 [2024-07-14 15:10:11.367002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.349 [2024-07-14 15:10:11.367036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.349 qpair failed and we were unable to recover it. 
00:37:32.349 [2024-07-14 15:10:11.367162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.367200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.367345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.367381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.367502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.367539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.367697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.367753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.367893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.367928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.368058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.368112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.368271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.368322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.368465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.368622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.368656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.368800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.368835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 
00:37:32.350 [2024-07-14 15:10:11.368969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.369002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.369136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.369213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.369458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.369499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.369725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.369784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.369951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.369985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.370123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.370192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.370343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.370394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.370557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.370615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.370744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.370779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.370904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.370953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 
00:37:32.350 [2024-07-14 15:10:11.371064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.371262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.371419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.371600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.371749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.371900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.371935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.372104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.372156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.372345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.372383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.372546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.372599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.372744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.372778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 
00:37:32.350 [2024-07-14 15:10:11.372964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.373019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.373175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.350 [2024-07-14 15:10:11.373216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.350 qpair failed and we were unable to recover it. 00:37:32.350 [2024-07-14 15:10:11.373349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.373387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.373601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.373659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.373808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.373844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.374003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.374172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.374363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.374540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.374723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 
00:37:32.351 [2024-07-14 15:10:11.374912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.374947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.375077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.375128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.375287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.375343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.375524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.375587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.375726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.375761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.375883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.375918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.376037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.376070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.376233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.376266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.376414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.376451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.376623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.376694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 
00:37:32.351 [2024-07-14 15:10:11.376808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.376845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.377864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.377920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.378062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.378210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.378381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 
00:37:32.351 [2024-07-14 15:10:11.378520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.378680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.378863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.378929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.379080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.379135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.351 qpair failed and we were unable to recover it. 00:37:32.351 [2024-07-14 15:10:11.379300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.351 [2024-07-14 15:10:11.379335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.379498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.379542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.379661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.379694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.379828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.379862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.379983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.380137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 
00:37:32.352 [2024-07-14 15:10:11.380271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.380416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.380569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.380733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.380889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.380924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.381082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.381131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.381321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.381358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.381524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.381570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.381708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.381746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.381889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.381943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 
00:37:32.352 [2024-07-14 15:10:11.382088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.382134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.382287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.382340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.382475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.382512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.382663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.382700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.382853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.382909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.383091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.383128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.383284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.383323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.383540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.383594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.383720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.383773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.383917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.383952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 
00:37:32.352 [2024-07-14 15:10:11.384091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.384126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.384261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.384295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.384427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.384461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.384573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.384606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.352 [2024-07-14 15:10:11.384738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.352 [2024-07-14 15:10:11.384772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.352 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.384937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.384975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.385113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.385146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.385279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.385316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.385464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.385501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.385646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.385683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 
00:37:32.353 [2024-07-14 15:10:11.385841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.385891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.386925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.386974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.387103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.387139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.387340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.387374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.387542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.387597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 
00:37:32.353 [2024-07-14 15:10:11.387731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.387768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.387908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.387943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.388927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.388976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.389111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.389168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.389348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.389401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 
00:37:32.353 [2024-07-14 15:10:11.389550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.389602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.389737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.389772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.389952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.390020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.390144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.390179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.390331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.390364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.353 [2024-07-14 15:10:11.390525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.353 [2024-07-14 15:10:11.390591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.353 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.390712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.390749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.390893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.390946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.391087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.391140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.391303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.391337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 
00:37:32.354 [2024-07-14 15:10:11.391492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.391547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.391658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.391692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.391838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.391883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.392031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.392065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.392270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.392307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.392437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.392494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.392697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.392757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.392956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.392990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.393121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.393155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.393296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.393349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 
00:37:32.354 [2024-07-14 15:10:11.393482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.393523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.393686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.393725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.393873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.393932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.394079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.394128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.394341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.394406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.394607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.394667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.394828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.394864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.395025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.395058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.395171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.395221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.395445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.395528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 
00:37:32.354 [2024-07-14 15:10:11.395740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.395834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.395991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.396027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.396182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.396378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.396415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.396612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.396667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.396796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.396835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.354 [2024-07-14 15:10:11.396978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.354 [2024-07-14 15:10:11.397012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.354 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.397189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.397243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.397403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.397464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.397654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.397715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 
00:37:32.355 [2024-07-14 15:10:11.397891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.397945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.398053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.398086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.398231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.398280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.398507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.398567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.398715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.398809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.398944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.398979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.399142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.399198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.399370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.399413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.399609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.399669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.399795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.399833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 
00:37:32.355 [2024-07-14 15:10:11.399966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.400000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.400160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.400193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.400376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.400432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.400677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.400714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.400850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.400902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.401073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.401113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.401279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.401317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.401494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.401532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.401664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.401697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.401902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.401936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 
00:37:32.355 [2024-07-14 15:10:11.402049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.355 [2024-07-14 15:10:11.402084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.355 qpair failed and we were unable to recover it. 00:37:32.355 [2024-07-14 15:10:11.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.402274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.402406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.402460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.402619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.402663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.402795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.402834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.402987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.403037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.403186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.403222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.403384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.403422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.403545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.403583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.403756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.403794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 
00:37:32.356 [2024-07-14 15:10:11.403979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.404138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.404375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.404529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.404694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.404910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.404946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.405059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.405093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.405228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.405262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.405412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.405450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.405606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.405644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 
00:37:32.356 [2024-07-14 15:10:11.405790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.405828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.405989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.406037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.406230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.406278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.406443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.406482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.406656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.406714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.406869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.406915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.407051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.407085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.407226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.407263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.407396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.407449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.407584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.407639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 
00:37:32.356 [2024-07-14 15:10:11.407762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.407801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.407967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.408002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.408121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.408174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.356 [2024-07-14 15:10:11.408302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.356 [2024-07-14 15:10:11.408344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.356 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.408483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.408535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.408678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.408721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.408868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.408931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.409038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.409071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.409205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.409256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.409380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.409419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 
00:37:32.357 [2024-07-14 15:10:11.409562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.409616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.409798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.409835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.409972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.410857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.410991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.411138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 
00:37:32.357 [2024-07-14 15:10:11.411316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.411472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.411695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.411900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.411938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.412118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.412173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.412345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.412383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.412536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.412573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.412696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.412745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.412901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.412956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.413114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.413167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 
00:37:32.357 [2024-07-14 15:10:11.413336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.413369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.413510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.413547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.413709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.413746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.413896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.413951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.414088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.414121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.357 [2024-07-14 15:10:11.414278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.357 [2024-07-14 15:10:11.414314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.357 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.414520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.414557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.414680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.414717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.414868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.414927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.415043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 
00:37:32.358 [2024-07-14 15:10:11.415203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.415351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.415521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.415675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.415849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.415909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.416057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.416111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.416268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.416323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.416477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.416515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.416668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.416720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.416851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.416894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 
00:37:32.358 [2024-07-14 15:10:11.417007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.417174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.417349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.417514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.417686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.417860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.417901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.418016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.418202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.418377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.418521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 
00:37:32.358 [2024-07-14 15:10:11.418701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.418882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.418916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.419069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.419210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.419394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.419558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.358 [2024-07-14 15:10:11.419728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.358 qpair failed and we were unable to recover it. 00:37:32.358 [2024-07-14 15:10:11.419862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.419902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.420018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.420193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 
00:37:32.359 [2024-07-14 15:10:11.420340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.420509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.420717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.420855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.420896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.421925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.421974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 
00:37:32.359 [2024-07-14 15:10:11.422084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.422119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.422275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.422327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.422507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.422558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.422677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.422711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.422846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.422886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.422998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.423031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.423203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.423240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.423422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.423458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.423643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.423700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.423856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.423901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 
00:37:32.359 [2024-07-14 15:10:11.424031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.424239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.424415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.424576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.424752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.424944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.424993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.359 [2024-07-14 15:10:11.425148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.359 [2024-07-14 15:10:11.425197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.359 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.425383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.425436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.425589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.425629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.425823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.425862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 
00:37:32.360 [2024-07-14 15:10:11.426005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.426040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.426150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.426185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.426375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.426412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.426586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.426623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.426771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.426837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.427036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.427085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.427207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.427264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.427485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.427543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.427749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.427818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.427989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.428030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 
00:37:32.360 [2024-07-14 15:10:11.428184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.428222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.428340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.428378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.428581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.428625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.428819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.428857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.429043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.429091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.429238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.429273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.429448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.429509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.429725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.429762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.429920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.429954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.430088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.430122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 
00:37:32.360 [2024-07-14 15:10:11.430275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.430313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.430448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.430502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.430643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.430680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.430797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.430836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.431064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.431098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.431290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.431327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.431498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.360 [2024-07-14 15:10:11.431532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.360 qpair failed and we were unable to recover it. 00:37:32.360 [2024-07-14 15:10:11.431783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.431821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.432005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.432038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.432222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.432290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 
00:37:32.361 [2024-07-14 15:10:11.432553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.432612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.432745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.432791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.433010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.433059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.433219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.433257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.433419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.433477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.433662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.433724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.433915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.433951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.434070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.434105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.434243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.434278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.434445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.434485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 
00:37:32.361 [2024-07-14 15:10:11.434651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.434705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.434873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.434942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.435097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.435146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.435297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.435334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.435449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.435485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.435741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.435800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.435961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.435998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.436135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.436191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.436321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.436360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.436512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.436549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 
00:37:32.361 [2024-07-14 15:10:11.436722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.436776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.436931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.361 [2024-07-14 15:10:11.436968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.361 qpair failed and we were unable to recover it. 00:37:32.361 [2024-07-14 15:10:11.437108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.437147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.437325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.437382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.437610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.437670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.437826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.437861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.438043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.438093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.438310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.438403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.438698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.438760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.438951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.438986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 
00:37:32.362 [2024-07-14 15:10:11.439096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.439130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.439310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.439345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.439463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.439524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.439673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.439722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.439896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.439931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.440090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.440138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.440323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.440377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.440585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.440623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.440755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.440793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.440965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.441000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 
00:37:32.362 [2024-07-14 15:10:11.441109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.441143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.441288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.441347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.441523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.441561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.441728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.441765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.441970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.442019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.442172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.442227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.442373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.442430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.442656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.442721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.442875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.442942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.443101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.443149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 
00:37:32.362 [2024-07-14 15:10:11.443295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.443350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.443623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.362 [2024-07-14 15:10:11.443683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.362 qpair failed and we were unable to recover it. 00:37:32.362 [2024-07-14 15:10:11.443796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.443830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.444028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.444077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.444242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.444296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.444516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.444577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.444783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.444843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.445019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.445056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.445213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.445268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.445463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.445528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 
00:37:32.363 [2024-07-14 15:10:11.445744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.445801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.446009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.446045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.446206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.446246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.446368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.446406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.446655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.446721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.446915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.446951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.447087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.447121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.447296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.447337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.447472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.447524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 00:37:32.363 [2024-07-14 15:10:11.447801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.363 [2024-07-14 15:10:11.447860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.363 qpair failed and we were unable to recover it. 
00:37:32.363 [2024-07-14 15:10:11.448033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.363 [2024-07-14 15:10:11.448067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.363 qpair failed and we were unable to recover it.
[... the same three-record failure repeats without interruption from 15:10:11.448 through 15:10:11.488 (console timestamps 00:37:32.363-00:37:32.370): posix.c:1038:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x6150001ffe80 or tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:32.370 [2024-07-14 15:10:11.489103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.489141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.489277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.489311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.489445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.489508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.489697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.489731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.489870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.489918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.490058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.490111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.490270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.490303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.490471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.490505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.490657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.490695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.490883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.490955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 
00:37:32.370 [2024-07-14 15:10:11.491104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.491140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.491251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.491306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.491492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.491540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.491685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.491720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.370 qpair failed and we were unable to recover it. 00:37:32.370 [2024-07-14 15:10:11.491888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.370 [2024-07-14 15:10:11.491940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.492098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.492137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.492322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.492356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.492461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.492514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.492669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.492706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.492840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.492874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-14 15:10:11.493046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.493184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.493353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.493541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.493740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.493926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.493961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.494102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.494157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.494305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.494342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.494506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.494541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.494653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.494704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-14 15:10:11.494851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.494897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.495931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.495984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.496168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.496203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.496334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.496368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.496502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.496541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-14 15:10:11.496718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.496756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.496908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.496961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.497098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.497132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.497329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.497363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.497520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-14 15:10:11.497592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-14 15:10:11.497732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.497769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.497906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.497940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.498079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.498248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.498428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-14 15:10:11.498582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.498743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.498914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.498949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.499124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.499178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.499329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.499365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.499496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.499531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.499714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.499752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.499905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.499940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.500093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.500128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.500280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.500314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-14 15:10:11.500514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.500547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.500721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.500755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.500892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.500950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.501087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.501122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.501251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.501301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.501590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.501646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.501804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.501838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.501981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.502015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.502218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.502277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-14 15:10:11.502438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-14 15:10:11.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-14 15:10:11.502627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.502664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.502792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.502830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.502995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.503141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.503308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.503520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.503684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.503893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.503959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.504080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.504124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.504294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.504347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-14 15:10:11.504497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.504535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.504698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.504732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.504909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.504948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.505106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.505140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.505274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.505308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.505471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.505545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.505694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.505731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.505889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.505923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.506062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.506095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.506226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.506266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-14 15:10:11.506449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.506484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.506612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.506650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.506833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.506871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.507060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.507096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.507267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.507322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.507528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.507591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.507758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.507791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.507951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.507989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.508140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.508175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.508305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.508339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-14 15:10:11.508477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.508511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.508663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.508700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-14 15:10:11.508852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-14 15:10:11.508896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.509067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.509105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.509277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.509314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.509464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.509497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.509677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.509714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.509902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.509941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.510067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.510101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.510239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.510274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 
00:37:32.374 [2024-07-14 15:10:11.510401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.510438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.510575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.510609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.510747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.510798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.510962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.511000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.511140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.511173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.511349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.511560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.511598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.511735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.511788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.512020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.512056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.512212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.512288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 
00:37:32.374 [2024-07-14 15:10:11.512445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.512480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.512616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.512670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.512846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.512905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.513097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.513241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.513442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.513638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.513816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.513972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.514010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.514172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.514206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 
00:37:32.374 [2024-07-14 15:10:11.514308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.514342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.374 [2024-07-14 15:10:11.514543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.374 [2024-07-14 15:10:11.514580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.374 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.514745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.514779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.514940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.514990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.515190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.515231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.515390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.515424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.515560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.515613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.515757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.515794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.515943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.515978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.516151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.516189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 
00:37:32.375 [2024-07-14 15:10:11.516363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.516400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.516579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.516612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.516742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.516785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.516964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.517131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.517293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.517490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.517681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.517830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.517892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.518069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.518107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 
00:37:32.375 [2024-07-14 15:10:11.518266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.518301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.518442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.518495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.518649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.518687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.518808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.518843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.518983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.519034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.519168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.519207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.519396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.519429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.519585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.519622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.519770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.519807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.519973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.520007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 
00:37:32.375 [2024-07-14 15:10:11.520170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.520210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.520391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.375 [2024-07-14 15:10:11.520429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.375 qpair failed and we were unable to recover it. 00:37:32.375 [2024-07-14 15:10:11.520616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.520652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.520852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.520897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.521065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.521262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.521420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.521617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.521823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.521999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.522054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-14 15:10:11.522230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.522283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.522473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.522510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.522658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.522693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.522858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.522921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.523082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.523115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.523270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.523306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.523549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.523608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.523736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.523770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.523910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.523945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.524051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.524085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-14 15:10:11.524215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.524264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.524413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.524450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.524655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.524710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.524885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.524949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.525114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.525148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.525325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.525376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.525533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.525585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.525743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.525777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.525914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.525949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.526106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.526160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-14 15:10:11.526324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.526377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.526618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-14 15:10:11.526670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-14 15:10:11.526834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.526868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.527009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.527064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.527197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.527248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.527432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.527483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.527603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.527638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.527802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.527836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.528004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.528059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.528242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.528294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-14 15:10:11.528450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.528501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.528607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.528640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.528780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.528815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.528971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.529020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.529184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.529233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.529407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.529443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.529645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.529705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.529836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.529870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.530023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.530056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.530222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.530291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-14 15:10:11.530493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.530530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.530802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.530839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.530982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.531017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.531130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.531163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.531331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.531368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.531569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.531607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.531764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.531802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.531984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.532019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.532215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.532265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.532514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.532588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-14 15:10:11.532717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.532757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.532950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.532986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.533119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-14 15:10:11.533152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-14 15:10:11.533273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.533326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.533553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.533609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.533742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.533795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.533961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.533996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.534096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.534130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.534308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.534341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.534478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.534531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-14 15:10:11.534722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.534756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.534898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.534932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.535043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.535077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.535250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.535288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.535487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.535542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.535712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.535749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.535886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.535942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.536080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.536114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.536282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.536318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.536478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.536527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-14 15:10:11.536760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.536797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.536925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.536975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.537113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.537147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.537322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.537359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.537508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.537545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.537686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.537723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-14 15:10:11.537902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-14 15:10:11.537936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.538043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.538077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.538228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.538276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.538546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.538592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-14 15:10:11.538743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.538781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.538942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.538976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.539138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.539172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.539304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.539341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.539537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.539575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.539806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.539858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.540035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.540068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.540248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.540327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.540483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.540582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.540724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.540761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-14 15:10:11.540898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.540951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.541944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.541996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.542179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.542227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.542393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.542446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.542636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.542689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-14 15:10:11.542850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.542897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.543940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.543976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.544087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.544121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.544356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.544390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-14 15:10:11.544498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-14 15:10:11.544532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
00:37:32.380 [2024-07-14 15:10:11.544644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.544678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.544825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.544874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.545912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.545947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.546126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.546164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.546305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.546357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
00:37:32.380 [2024-07-14 15:10:11.546477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.546516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.546737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.546788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.546949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.546983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.547112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.547145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.547310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.547348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.547486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.547523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.547629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.547666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.547823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.547856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.548006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.548040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.548196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.548232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
00:37:32.380 [2024-07-14 15:10:11.548446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.548482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.548633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.548670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.548842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.548887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.549089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.549138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.549298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.549352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.549509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.549561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.549838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.549908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.550050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.550084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.550267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.550321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.550560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.550619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
00:37:32.380 [2024-07-14 15:10:11.550754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.550789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.550960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-14 15:10:11.550995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-14 15:10:11.551158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.551210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.551393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.551449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.551715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.551771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.551932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.551971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.552181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.552232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.552429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.552481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.552647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.552814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.552848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 
00:37:32.381 [2024-07-14 15:10:11.552998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.553051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.553202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.553254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.553438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.553490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.553633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.553667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.553793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.553850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.554024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.554200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.554370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.554534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.554709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 
00:37:32.381 [2024-07-14 15:10:11.554898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.554944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.555118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.555300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.555439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.555650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.555789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.555954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.556008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.556167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.556231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.556388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.556438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.556607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.556644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 
00:37:32.381 [2024-07-14 15:10:11.556786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.556821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.556989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.557027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.557204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.557241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.381 [2024-07-14 15:10:11.557422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.381 [2024-07-14 15:10:11.557460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.381 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.557585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.557622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.557796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.557831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.557983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.558149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.558395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.558588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 
00:37:32.382 [2024-07-14 15:10:11.558749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.558929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.558965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.559131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.559269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.559433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.559618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.559827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.559983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.560173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.560358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 
00:37:32.382 [2024-07-14 15:10:11.560520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.560728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.560932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.560966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.561065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.561099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.561282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.561320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.561488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.561525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.561646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.561683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.561860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.561908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.562032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.562065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.562204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.562243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 
00:37:32.382 [2024-07-14 15:10:11.562426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.562464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.562615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.562652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.562814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.562848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.562972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.563006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.563125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.563158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.563334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.563371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.563519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.563556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-14 15:10:11.563701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-14 15:10:11.563738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.563904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.563953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.564101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.564138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-14 15:10:11.564334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.564387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.564549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.564602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.564743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.564777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.564952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.564986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.565096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.565130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.565295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.565329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.565456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.565490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.565663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.565699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.565832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.565867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.566057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-14 15:10:11.566226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.566407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.566581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.566739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.566908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.566942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.567046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.567237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.567402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.567586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.567777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-14 15:10:11.567954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.567999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.568100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.568133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.568303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.568337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.568490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.568528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.568678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.568716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.568891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.568945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.569076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-14 15:10:11.569110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-14 15:10:11.569245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.569282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.569489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.569526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.569680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.569722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 
00:37:32.384 [2024-07-14 15:10:11.569918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.569953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.570063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.570096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.570221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.570255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.570390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.570428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.570593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.570636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.570783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.570821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.571001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.571035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.571169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.571202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.571386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.571425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.571580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.571629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 
00:37:32.384 [2024-07-14 15:10:11.571804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.571841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.571986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.572136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.572320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.572478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.572646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.572872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.572921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.573058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.573091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.573285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.573322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.573504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.573541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 
00:37:32.384 [2024-07-14 15:10:11.573659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.573696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.573838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.573888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.574933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.574967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.575071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.575105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-14 15:10:11.575247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.575299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 
00:37:32.384 [2024-07-14 15:10:11.575426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-14 15:10:11.575463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.575638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.575676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.575849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.575895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.576067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.576238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.576458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.576641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.576845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.576995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.577136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 
00:37:32.385 [2024-07-14 15:10:11.577333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.577472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.577646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.577816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.577850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.577968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.578118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.578307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.578541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.578703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.578872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.578916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 
00:37:32.385 [2024-07-14 15:10:11.579073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.579107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.579283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.579320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.579525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.579562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.579676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.579713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.579873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.579926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.580039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.580072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.580186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.580220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.580386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.580434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.580616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.580664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 00:37:32.385 [2024-07-14 15:10:11.580841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.580887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it. 
00:37:32.385 [2024-07-14 15:10:11.581043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.385 [2024-07-14 15:10:11.581077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.385 qpair failed and we were unable to recover it.
00:37:32.385 [the same pair of errors repeats continuously from 15:10:11.581 through 15:10:11.620 — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 — cycling over tqpair=0x6150001f2a00, 0x6150001ffe80, and 0x615000210000, each attempt ending with "qpair failed and we were unable to recover it."]
00:37:32.677 [2024-07-14 15:10:11.620655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.620689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it.
00:37:32.677 [2024-07-14 15:10:11.620793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.620827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.621861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.621983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.622123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.622272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-14 15:10:11.622426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.622559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.622696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.622908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.622946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.623088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.623122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.623254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.623287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.623467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.623504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.623630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.623666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.623836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.623873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.624025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-14 15:10:11.624173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.624347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.624530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.624717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.624903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.624947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.625054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.625087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.625238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.625277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.625418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.625452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.625567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-14 15:10:11.625601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-14 15:10:11.625731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.625765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 
00:37:32.678 [2024-07-14 15:10:11.625906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.625946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.626884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.626935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.627046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.627080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.627259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.627293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.627408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.627451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 
00:37:32.678 [2024-07-14 15:10:11.627641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.627678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.627800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.627837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.627984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.628796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.628971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.629119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 
00:37:32.678 [2024-07-14 15:10:11.629282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.629464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.629630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.629799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.629837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.629978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.630012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.630142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.630196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.630376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.630413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.630579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.630613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-14 15:10:11.630727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-14 15:10:11.630777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.630931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.630968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-14 15:10:11.631095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.631129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.631242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.631276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.631421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.631458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.631595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.631630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.631774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.631808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.631974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.632116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.632288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.632494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.632716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-14 15:10:11.632854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.632897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.633073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.633110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.633252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.633286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.633446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.633479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.633620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.633658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.633816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.633851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.634016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.634213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.634393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.634532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-14 15:10:11.634706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.634912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.634956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.635940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.635974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.636104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.636165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.636283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.636321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-14 15:10:11.636461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-14 15:10:11.636495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-14 15:10:11.636609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.636643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.636770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.636808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.636954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.636988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.637964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.637999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 
00:37:32.680 [2024-07-14 15:10:11.638156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.638193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.638333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.638371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.638518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.638551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.638670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.638733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.638908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.638961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.639103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.639149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.639286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.639337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.639450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.639487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.639616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.639649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.639796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.639852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 
00:37:32.680 [2024-07-14 15:10:11.639993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.640831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.640991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.641034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.641169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.641203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.641367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.641421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.680 [2024-07-14 15:10:11.641541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.641579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 
00:37:32.680 [2024-07-14 15:10:11.641716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.680 [2024-07-14 15:10:11.641750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.680 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.641951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.642125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.642314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.642523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.642712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.642908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.642945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.643088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.643137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.643253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.643290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.643451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.643485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 
00:37:32.681 [2024-07-14 15:10:11.643622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.643673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.643826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.643864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.644914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.644952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.645087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.645121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.645232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.645266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 
00:37:32.681 [2024-07-14 15:10:11.645395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.645445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.645563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.645597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.645782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.645820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.645975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.646818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.681 qpair failed and we were unable to recover it. 00:37:32.681 [2024-07-14 15:10:11.646986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.681 [2024-07-14 15:10:11.647021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 
00:37:32.682 [2024-07-14 15:10:11.647157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.647191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.647290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.647324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.647492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.647529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.647693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.647728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.647887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.647930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.648073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.648107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.648295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.648334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.648447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.648485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.648625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.648658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.648797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.648839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 
00:37:32.682 [2024-07-14 15:10:11.648984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.649132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.649336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.649516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.649735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.649958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.649992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.650151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.650188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.650309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.650357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.650511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.650544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.650686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.650739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 
00:37:32.682 [2024-07-14 15:10:11.650919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.650956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.682 [2024-07-14 15:10:11.651139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.682 [2024-07-14 15:10:11.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.682 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.651284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.651337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.651458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.651495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.651631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.651664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.651765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.651798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.651950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.651984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.652093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.652234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.652400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 
00:37:32.683 [2024-07-14 15:10:11.652566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.652712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.652937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.652974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.653111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.653146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.653287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.653337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.653494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.653532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.653684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.653721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.653888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.653939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.654076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.654221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 
00:37:32.683 [2024-07-14 15:10:11.654367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.654558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.654733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.654918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.654970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.655130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.655164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.655325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.655358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.655511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.655553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.655673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.655710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.655836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.655870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.656025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.656076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 
00:37:32.683 [2024-07-14 15:10:11.656228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.656265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.656382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.656416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.656556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.656589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.656754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.683 [2024-07-14 15:10:11.656792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.683 qpair failed and we were unable to recover it. 00:37:32.683 [2024-07-14 15:10:11.656956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.656990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.657126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.657193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.657344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.657382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.657541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.657575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.657722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.657759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.657885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.657935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 
00:37:32.684 [2024-07-14 15:10:11.658102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.658146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.658326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.658363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.658562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.658595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.658727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.658761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.658944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.658982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.659154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.659192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.659374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.659407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.659551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.659589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.659703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.659741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.659868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.659913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 
00:37:32.684 [2024-07-14 15:10:11.660070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.660104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.660242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.660276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.660452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.660486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.660597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.660631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.660788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.660825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.660993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.661131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.661305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.661504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.661677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 
00:37:32.684 [2024-07-14 15:10:11.661885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.661933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.662080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.662123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.662260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.662311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.662482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.662519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.662696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-14 15:10:11.662733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-14 15:10:11.662889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.662941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.663098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.663145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.663306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.663339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.663477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.663528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.663682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.663719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-14 15:10:11.663919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.663953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.664099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.664138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.664287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.664324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.664472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.664506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.664691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.664729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.664840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.664884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.665052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.665087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.665203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.665237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.665389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.665426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.665588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.665622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-14 15:10:11.665765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.665799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.665971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.666136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.666335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.666547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.666737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.666904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.666943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.667105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.667152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.667275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.667309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.667413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.667447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-14 15:10:11.667583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.667620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.667768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.667802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.667969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.668008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.668167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.668201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.668335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.668369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.668503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-14 15:10:11.668537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-14 15:10:11.668694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.668732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.668891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.668937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.669034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.669234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-14 15:10:11.669434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.669568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.669755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.669954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.669988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.670125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.670159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.670295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.670332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.670492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.670531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.670637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.670671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.670882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.670925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.671087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-14 15:10:11.671274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.671419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.671607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.671782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.671952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.671986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.672170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.672204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.672380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.672418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.672559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.672596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.672747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.672780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.672891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.672933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-14 15:10:11.673078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.673116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.673271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.673305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.673439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.673474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.673655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.673693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.673818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.673853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.674014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.674057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.674231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-14 15:10:11.674265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-14 15:10:11.674438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-14 15:10:11.674471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-14 15:10:11.674609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-14 15:10:11.674642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-14 15:10:11.674775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-14 15:10:11.674813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 
00:37:32.687 [2024-07-14 15:10:11.674989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.687 [2024-07-14 15:10:11.675023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:32.687 qpair failed and we were unable to recover it.
00:37:32.695 [2024-07-14 15:10:11.714469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.714503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.714659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.714702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.714843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.714889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.715093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.715249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.715460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.715648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.715849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.715989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.716199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 
00:37:32.695 [2024-07-14 15:10:11.716412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.716573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.716757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.716926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.716981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.717129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-14 15:10:11.717166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-14 15:10:11.717327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.717361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.717544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.717582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.717754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.717791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.717986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.718167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 
00:37:32.696 [2024-07-14 15:10:11.718362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.718522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.718665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.718861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.718907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.719064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.719098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.719226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.719276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.719449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.719487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.719655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.719689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.719830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.719867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.720002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 
00:37:32.696 [2024-07-14 15:10:11.720170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.720304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.720463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.720660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.720853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.720900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.721029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.721063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.721209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.721243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.721409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.721447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.721624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.721661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.721797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.721832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 
00:37:32.696 [2024-07-14 15:10:11.722001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.722142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.722350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.722558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.722758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.696 [2024-07-14 15:10:11.722937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.696 [2024-07-14 15:10:11.722972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.696 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.723131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.723165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.723303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.723340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.723468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.723501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.723640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.723674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 
00:37:32.697 [2024-07-14 15:10:11.723825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.723862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.724932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.724970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.725134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.725167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.725346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.725383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.725563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.725601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 
00:37:32.697 [2024-07-14 15:10:11.725747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.725782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.725934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.725987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.726104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.726153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.726285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.726320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.726480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.726531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.726652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.726690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.726850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.726893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.727077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.727115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.727277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.727311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.727472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.727506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 
00:37:32.697 [2024-07-14 15:10:11.727657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.727695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.727821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.727859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.728028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.728061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.728162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.728200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.728358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.728395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.697 qpair failed and we were unable to recover it. 00:37:32.697 [2024-07-14 15:10:11.728590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.697 [2024-07-14 15:10:11.728624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.728771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.728809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.728962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.728999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.729104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.729138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.729297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.729350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 
00:37:32.698 [2024-07-14 15:10:11.729547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.729581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.729742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.729776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.729890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.729928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.730100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.730152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.730282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.730316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.730459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.730508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.730682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.730719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.730861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.730902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.731009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.731180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 
00:37:32.698 [2024-07-14 15:10:11.731413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.731564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.731730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.731897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.731931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.732068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.732102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.732249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.732287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.732462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.732496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.732631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.732682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.732870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.732914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.733029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.733062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 
00:37:32.698 [2024-07-14 15:10:11.733228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.733261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.733455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.733492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.733649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.733683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.733791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.733825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.733998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.734036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.734220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.734266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.734423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.734461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.734603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.698 [2024-07-14 15:10:11.734640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.698 qpair failed and we were unable to recover it. 00:37:32.698 [2024-07-14 15:10:11.734779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.734813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.734972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 
00:37:32.699 [2024-07-14 15:10:11.735176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.735343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.735508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.735708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.735872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.735913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.736076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.736110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.736273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.736310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.736476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.736509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.736665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.736715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.736821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.736858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 
00:37:32.699 [2024-07-14 15:10:11.737025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.737059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.737284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.737321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.737492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.737529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.737681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.737715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.737867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.737912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.738062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.738231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.738431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.738626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.738783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 
00:37:32.699 [2024-07-14 15:10:11.738924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.738959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.739106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.739144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.739264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.739297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.739434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.739468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.739627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.739664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.739820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.739854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.740016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.699 [2024-07-14 15:10:11.740055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.699 qpair failed and we were unable to recover it. 00:37:32.699 [2024-07-14 15:10:11.740205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.740242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.740370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.740404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.740566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.740618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 
00:37:32.700 [2024-07-14 15:10:11.740776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.740814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.741053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.741249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.741450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.741653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.741816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.741965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.742163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.742342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.742538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 
00:37:32.700 [2024-07-14 15:10:11.742735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.742927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.742966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.743106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.743144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.743306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.743344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.743450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.743483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.743641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.743675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.743815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.743866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.744020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.744201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.744396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 
00:37:32.700 [2024-07-14 15:10:11.744594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.744795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.744961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.744995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.745154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.700 [2024-07-14 15:10:11.745191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.700 qpair failed and we were unable to recover it. 00:37:32.700 [2024-07-14 15:10:11.745353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.745386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.745521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.745555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.745665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.745715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.745873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.745933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.746060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.746206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 
00:37:32.701 [2024-07-14 15:10:11.746419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.746615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.746759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.746933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.746968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.747108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.747141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.747301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.747334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.747495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.747532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.747685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.747719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.747835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.747869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.748021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.748058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 
00:37:32.701 [2024-07-14 15:10:11.748261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.748295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.748425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.748459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.748599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.748632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.748793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.748826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.748968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.749173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.749363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.749540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.749737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.749908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.749943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 
00:37:32.701 [2024-07-14 15:10:11.750100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.750134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.750325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.750359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.750488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.750522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.750676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.701 [2024-07-14 15:10:11.750718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.701 qpair failed and we were unable to recover it. 00:37:32.701 [2024-07-14 15:10:11.750875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.750921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.751073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.751107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.751296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.751333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.751458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.751496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.751675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.751708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.751858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.751906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 
00:37:32.702 [2024-07-14 15:10:11.752061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.752098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.752266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.752300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.752435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.752469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.752634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.752671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.752790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.752824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.752966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.753109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.753314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.753487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.753686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 
00:37:32.702 [2024-07-14 15:10:11.753908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.753943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.754050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.754102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.754214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.754251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.754439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.754473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.754647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.754684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.754811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.754848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 
00:37:32.702 [2024-07-14 15:10:11.755652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.755830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.755968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.756002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.756189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.756227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.756371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.756408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.756534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.702 [2024-07-14 15:10:11.756568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.702 qpair failed and we were unable to recover it. 00:37:32.702 [2024-07-14 15:10:11.756702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.756735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.756892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.756945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.757105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.757139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.757296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.757334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 
00:37:32.703 [2024-07-14 15:10:11.757470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.757508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.757666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.757700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.757804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.757856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.758018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.758070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.758239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.758272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.758451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.758489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.758641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.758678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.758831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.758865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.759000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.759206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 
00:37:32.703 [2024-07-14 15:10:11.759369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.759564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.759779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.759953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.759987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.760097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.760131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.760258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.760292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.760425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.760459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.760592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.760626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.760836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.760870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.761035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 
00:37:32.703 [2024-07-14 15:10:11.761173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.761366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.761558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.761696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.761862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.761907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.703 [2024-07-14 15:10:11.762035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.703 [2024-07-14 15:10:11.762069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.703 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.762201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.762235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.762430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.762467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.762626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.762660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.762799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.762833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 
00:37:32.704 [2024-07-14 15:10:11.762972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.763146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.763295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.763522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.763731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.763919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.763953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.764085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.764250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.764421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.764591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 
00:37:32.704 [2024-07-14 15:10:11.764736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.764896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.764931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.765925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.765978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.766128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.766179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.766317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.766350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 
00:37:32.704 [2024-07-14 15:10:11.766506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.766543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.766705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.766738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.766899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.766934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.767080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.767226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.767438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.767655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.704 [2024-07-14 15:10:11.767814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.704 qpair failed and we were unable to recover it. 00:37:32.704 [2024-07-14 15:10:11.767946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.767980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.768143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.768176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 
00:37:32.705 [2024-07-14 15:10:11.768312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.768349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.768478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.768513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.768648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.768683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.768842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.768900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.769053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.769086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.769226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.769277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.769426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.769464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.769613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.769657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.769767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.769817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 
00:37:32.705 [2024-07-14 15:10:11.770206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.770863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.770998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.771035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.771160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.771194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.771352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.771385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.771521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.771557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 00:37:32.705 [2024-07-14 15:10:11.771737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.705 [2024-07-14 15:10:11.771771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.705 qpair failed and we were unable to recover it. 
00:37:32.712 [2024-07-14 15:10:11.809211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.809245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.809349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.809383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.809545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.809582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.809697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.809731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.809845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.809886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.810043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.810081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.810260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.810294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.810411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.810450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.810599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.810637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.810841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.810888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 
00:37:32.712 [2024-07-14 15:10:11.811050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.811084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.811238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.811275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.811422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.811455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.811642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.811679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.811853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.811930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.812094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.812128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.712 [2024-07-14 15:10:11.812304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.712 [2024-07-14 15:10:11.812352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.712 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.812489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.812522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.812636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.812669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.812832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.812866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 
00:37:32.713 [2024-07-14 15:10:11.813031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.813225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.813391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.813560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.813754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.813942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.813979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.814096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.814133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.814311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.814344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.814485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.814518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.814654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.814688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 
00:37:32.713 [2024-07-14 15:10:11.814822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.814859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.815020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.815057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.815237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.815274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.815459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.815493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.815635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.815672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.815839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.815872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.816026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.816196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.816367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.816540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 
00:37:32.713 [2024-07-14 15:10:11.816701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.816894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.816932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.817103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.817259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.817431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.713 [2024-07-14 15:10:11.817767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.713 qpair failed and we were unable to recover it. 00:37:32.713 [2024-07-14 15:10:11.817908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.817958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.818112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.818145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.818333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.818370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 
00:37:32.714 [2024-07-14 15:10:11.818492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.818529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.818704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.818742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.818894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.818944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.819103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.819137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.819299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.819332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.819447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.819481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.819646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.819683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.819806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.819839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.820005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.820155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 
00:37:32.714 [2024-07-14 15:10:11.820354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.820548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.820738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.820939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.820973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.821121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.821158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.821303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.821340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.821517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.821550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.821658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.821707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.821829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.821866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.822004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.822037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 
00:37:32.714 [2024-07-14 15:10:11.822171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.822209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.822362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.822399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.822579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.822613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.822739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.822772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.822962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.823000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.823129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.823162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.823325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.823379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.823527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.714 [2024-07-14 15:10:11.823564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.714 qpair failed and we were unable to recover it. 00:37:32.714 [2024-07-14 15:10:11.823700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.823733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.823890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.823924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 
00:37:32.715 [2024-07-14 15:10:11.824071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.824108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.824290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.824324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.824503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.824540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.824722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.824756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.824894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.824928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.825057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.825090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.825242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.825279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.825458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.825491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.825667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.825704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.825839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.825901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 
00:37:32.715 [2024-07-14 15:10:11.826040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.826212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.826366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.826539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.826712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.826869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.826917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.827072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.827106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.827277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.827310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.827501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.827537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.827695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.827728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 
00:37:32.715 [2024-07-14 15:10:11.827870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.827915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.828892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.828926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.829032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.829075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.715 qpair failed and we were unable to recover it. 00:37:32.715 [2024-07-14 15:10:11.829211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.715 [2024-07-14 15:10:11.829245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.829396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.829433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 
00:37:32.716 [2024-07-14 15:10:11.829544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.829586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.829761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.829794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.829897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.829932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.830091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.830128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.830295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.830328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.830477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.830514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.830632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.830669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.830850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.830891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.831037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.831190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 
00:37:32.716 [2024-07-14 15:10:11.831340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.831533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.831713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.831889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.831924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 00:37:32.716 [2024-07-14 15:10:11.832919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.716 [2024-07-14 15:10:11.832953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.716 qpair failed and we were unable to recover it. 
00:37:32.716 [2024-07-14 15:10:11.833054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.716 [2024-07-14 15:10:11.833087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:32.716 qpair failed and we were unable to recover it.
00:37:32.716 [2024-07-14 15:10:11.833260 through 15:10:11.870507] posix.c:1038:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same three-line failure (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x6150001f2a00 and tqpair=0x6150001ffe80, all against addr=10.0.0.2, port=4420.
00:37:32.723 [2024-07-14 15:10:11.870669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.723 [2024-07-14 15:10:11.870704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.723 qpair failed and we were unable to recover it.
00:37:32.723 [2024-07-14 15:10:11.870834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.870894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.871084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.871118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.871233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.871267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.871407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.871441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.871602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.871639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.871804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.871838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.872026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.872075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.872211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.872251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.872391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.872426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.872562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.872612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 
00:37:32.723 [2024-07-14 15:10:11.872764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.872807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.872977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.873011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.873146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.873179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.873319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.873356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.723 [2024-07-14 15:10:11.873511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.723 [2024-07-14 15:10:11.873544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.723 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.873676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.873733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.873861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.873935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.874055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.874092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.874208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.874242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.874409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.874447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 
00:37:32.724 [2024-07-14 15:10:11.874585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.874619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.874759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.874810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.874975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.875147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.875342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.875518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.875722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.875915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.875951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.876056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.876261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 
00:37:32.724 [2024-07-14 15:10:11.876457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.876614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.876765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.876944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.876979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.877118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.877156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.877297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.877334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.877458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.877509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.877664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.877701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.877858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.877917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.878051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.878084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 
00:37:32.724 [2024-07-14 15:10:11.878204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.878256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.878436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.878477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.878624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.878686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.878818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.878855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.879003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.879038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.879234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.879302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.879471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.879526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.879655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.879708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.879868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.879938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.880072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.880124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 
00:37:32.724 [2024-07-14 15:10:11.880307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.880364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.880536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.880629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.724 [2024-07-14 15:10:11.880804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.724 [2024-07-14 15:10:11.880839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.724 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.881092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.881295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.881478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.881640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.881827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.881966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.882112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 
00:37:32.725 [2024-07-14 15:10:11.882365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.882520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.882707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.882902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.882962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.883088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.883152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.883304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.883341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.883454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.883492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.883611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.883670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.883806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.883844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.884002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 
00:37:32.725 [2024-07-14 15:10:11.884190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.884435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.884597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.884738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.884932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.884970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.885106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.885144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.885276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.885309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.885468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.885517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.885635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.885672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.885822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.885857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 
00:37:32.725 [2024-07-14 15:10:11.885999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.886034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.886177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.886231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.886413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.886466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.886693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.886749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.886888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.886930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.887037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.887070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.887211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.887249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.887433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.887490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.887643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.887680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.725 [2024-07-14 15:10:11.887846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.887889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 
00:37:32.725 [2024-07-14 15:10:11.888043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.725 [2024-07-14 15:10:11.888083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.725 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.888224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.888278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.888498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.888551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.888683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.888748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.888865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.888916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.889076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.889115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.889276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.889313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.889438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.889476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.889589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.889626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.889798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.889852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 
00:37:32.726 [2024-07-14 15:10:11.890021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.890060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.890270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.890322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.890500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.890558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.890716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.890771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.890888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.890933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.891094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.891144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.891314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.891369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.891568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.891636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.891805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.891840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.891988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.892023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 
00:37:32.726 [2024-07-14 15:10:11.892137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.892396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.892434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.892580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.892618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.892770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.892819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.892977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.893157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.893343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.893534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.893725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.893920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.893957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 
00:37:32.726 [2024-07-14 15:10:11.894071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.894128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.894313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.894351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.894475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.894513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.894632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.894671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.894843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.894887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.895034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.726 [2024-07-14 15:10:11.895069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.726 qpair failed and we were unable to recover it. 00:37:32.726 [2024-07-14 15:10:11.895213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.895263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.895412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.895450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.895618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.895653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.895765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.895799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 
00:37:32.727 [2024-07-14 15:10:11.895971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.896011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.896130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.896183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.896391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.896428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.896577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.896615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.896776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.896816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.896976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.897025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.897191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.897240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.897433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.897493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.897680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.897749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.897946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.897980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 
00:37:32.727 [2024-07-14 15:10:11.898111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.898169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.898299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.898337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.898548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.898586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.898722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.898786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.898938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.898972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.899111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.899144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.899305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.899361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.899517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.899576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.899783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.899820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.899961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.899995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 
00:37:32.727 [2024-07-14 15:10:11.900132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.900185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.900372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.900409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.900517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.900554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.900703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.900739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.900909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.900944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.901104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.901153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.727 [2024-07-14 15:10:11.901325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.727 [2024-07-14 15:10:11.901365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.727 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.901553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.901592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.901740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.901776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.901930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.901979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 
00:37:32.728 [2024-07-14 15:10:11.902135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.902183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.902344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.902383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.902497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.902534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.902658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.902696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.902853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.902894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.903012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.903046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.903196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.903234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.903367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.903420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.903550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.903590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.903761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.903798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 
00:37:32.728 [2024-07-14 15:10:11.903992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.904186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.904386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.904540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.904699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.904885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.904960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.905088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.905123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.905316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.905353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.905516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.905558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.905725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.905759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 
00:37:32.728 [2024-07-14 15:10:11.905931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.905966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.906117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.906289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.906457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.906611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.906783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.906962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.907156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.907345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.907512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 
00:37:32.728 [2024-07-14 15:10:11.907685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.907855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.907906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.908038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.908072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.908171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.908204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.908333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.908366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.908507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.728 [2024-07-14 15:10:11.908543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.728 qpair failed and we were unable to recover it. 00:37:32.728 [2024-07-14 15:10:11.908749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.908786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.908897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.908955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.909086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.909119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.909281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.909316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 
00:37:32.729 [2024-07-14 15:10:11.909423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.909457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.909612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.909649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.909811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.909845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.909970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.910135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.910348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.910493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.910662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.910862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.910903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.911039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.911072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 
00:37:32.729 [2024-07-14 15:10:11.911172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.911206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.911346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.911383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.911528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.911565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.911748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.911800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.911966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.912142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.912368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.912559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.912749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.912909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.912944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 
00:37:32.729 [2024-07-14 15:10:11.913100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.913298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.913467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.913628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.913779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.913943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.913978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.914124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.914158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.914346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.914400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.914531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.914572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.914732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.914767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 
00:37:32.729 [2024-07-14 15:10:11.914873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.914915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.915030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.915065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.915207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.915261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.915392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.915443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.915625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.915662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.729 [2024-07-14 15:10:11.915815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.729 [2024-07-14 15:10:11.915849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.729 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.916035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.916089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.916235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.916278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.916433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.916469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.916672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.916726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 
00:37:32.730 [2024-07-14 15:10:11.916884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.916922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.917090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.917292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.917472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.917634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.917798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.917974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.918023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.918205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.918252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.918404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.918445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.918655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.918718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 
00:37:32.730 [2024-07-14 15:10:11.918852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.918897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.919018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.919052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.919218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.919255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.919479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.919541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.919686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.919724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.919886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.919944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.920060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.920094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.920214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.920264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.920454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.920508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.920686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.920723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 
00:37:32.730 [2024-07-14 15:10:11.920854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.920909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.921056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.921105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.921283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.921319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.921461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.921513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.921691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.921752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.921902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.921936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.922067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.922100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.922262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.922300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.922518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.922583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.922706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.922743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 
00:37:32.730 [2024-07-14 15:10:11.922901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.922953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.923132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.923189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.923365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.923428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.923670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.923733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.923866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.730 [2024-07-14 15:10:11.923908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.730 qpair failed and we were unable to recover it. 00:37:32.730 [2024-07-14 15:10:11.924074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.924108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.924317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.924381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.924584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.924647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.924754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.924790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.924918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.924965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 
00:37:32.731 [2024-07-14 15:10:11.925107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.925140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.925316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.925354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.925489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.925543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.925687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.925725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.925869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.925916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.926067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.926116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.926324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.926378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.926542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.926599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.926760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.926794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.926951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.927004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 
00:37:32.731 [2024-07-14 15:10:11.927149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.927213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.927385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.927424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.927683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.927740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.927867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.927919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.928080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.928130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.928298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.928355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.928514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.928549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.928664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.928698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.928823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.928858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.929007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 
00:37:32.731 [2024-07-14 15:10:11.929163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.929330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.929500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.929668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.929870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.929912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.930010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.930044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.930204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.930269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.930428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.930483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.930629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.930687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.930856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.930899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 
00:37:32.731 [2024-07-14 15:10:11.931092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.931143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.931334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.931385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.931636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.931692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.931830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.931864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.932024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.932059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.932223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.731 [2024-07-14 15:10:11.932259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.731 qpair failed and we were unable to recover it. 00:37:32.731 [2024-07-14 15:10:11.932363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.932397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.932512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.932550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.932724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.932761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.932864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.932927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 
00:37:32.732 [2024-07-14 15:10:11.933056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.933089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.933256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.933309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.933499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.933552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.933741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.933795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.933960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.933995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.934203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.934257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.934416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.934456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.934620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.934659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.934819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.934854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.934977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 
00:37:32.732 [2024-07-14 15:10:11.935143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.935309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.935486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.935657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.935830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.935865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.936018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.936051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.936242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.936306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.936563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.936621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.936798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.936836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.936996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.937030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 
00:37:32.732 [2024-07-14 15:10:11.937199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.937236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.937356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.937406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.937574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.937611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.937794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.937828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.937996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.938032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.938196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.732 [2024-07-14 15:10:11.938234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.732 qpair failed and we were unable to recover it. 00:37:32.732 [2024-07-14 15:10:11.938380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.938416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.938593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.938631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.938782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.938819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.938977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.939025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 
00:37:32.733 [2024-07-14 15:10:11.939177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.939233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.939457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.939512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.939684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.939740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.939917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.939952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.940056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.940089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.940232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.940283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.940439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.940477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.940704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.940747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.940883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.940936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.941056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.941089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 
00:37:32.733 [2024-07-14 15:10:11.941233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.941266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.941420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.941457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.941573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.941611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.941794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.941832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.941991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.942040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.942215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.942255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.942463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.942522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.942666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.942703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.942893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.942952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.943066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.943099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 
00:37:32.733 [2024-07-14 15:10:11.943224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.943276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.943410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.943448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.943648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.943686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.943822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.943887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.944061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.944097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.944262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.944297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.944434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.944469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.944705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.944742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.733 qpair failed and we were unable to recover it. 00:37:32.733 [2024-07-14 15:10:11.944915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.733 [2024-07-14 15:10:11.944951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.945078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.945112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-07-14 15:10:11.945303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.945342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.945520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.945558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.945696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.945733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.945891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.945942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.946064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.946114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.946264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.946300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.946473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.946528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.946660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.946714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.946834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.946892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.947042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.947078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-07-14 15:10:11.947209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.947247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.947421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.947485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.947626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.947664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.947811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.947849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.948068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.948308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.948529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.948680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.948823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.948992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-07-14 15:10:11.949176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.949340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.949544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.949736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.949951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.949985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.950121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.950163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.950313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.950351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.950461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.950499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.950683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.950720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.950891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.950962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 
00:37:32.734 [2024-07-14 15:10:11.951108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.951152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.951366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.951405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.951564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.951618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.951766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.951804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.951989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.952023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.952167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.952242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.952431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.952472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.952658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.952753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.952940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.734 [2024-07-14 15:10:11.952975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.734 qpair failed and we were unable to recover it. 00:37:32.734 [2024-07-14 15:10:11.953088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.953121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-07-14 15:10:11.953299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.953332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.953485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.953548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.953717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.953756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.953887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.953942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.954071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.954105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.954264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.954297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.954487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.954524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.954701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.954740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.954943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.954977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.955082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.955116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-07-14 15:10:11.955296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.955333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.955442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.955479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.955599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.955637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.955781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.955818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.955966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.956108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.956250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.956446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.956698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.956911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.956966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 
00:37:32.735 [2024-07-14 15:10:11.957070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.957103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.957277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.957315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.957463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.957500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.957677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.735 [2024-07-14 15:10:11.957714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.735 qpair failed and we were unable to recover it. 00:37:32.735 [2024-07-14 15:10:11.957914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.957963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.958126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.958175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.958345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.958386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.958567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.958605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.958821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.958860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.959040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.959073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 
00:37:33.034 [2024-07-14 15:10:11.959207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.959246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.959386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.959439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.959579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.959617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.959775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.959813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.959977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.960145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.960363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.960594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.960768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.960923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.960956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 
00:37:33.034 [2024-07-14 15:10:11.961080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.961144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.961313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.961347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.961503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.961562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.961725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.961780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.961936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.961970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.962127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.962165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.962288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.962333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.962473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.962521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.962642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.962680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.962809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.962882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 
00:37:33.034 [2024-07-14 15:10:11.963025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.963062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.963264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.963303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.963479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.963518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.963671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.963710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.963888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.963933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.964047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.964082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.964242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.964294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.964483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.964541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.964659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.964694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.964853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.964894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 
00:37:33.034 [2024-07-14 15:10:11.965024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.965062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.965223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.965263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.965407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.965445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.034 qpair failed and we were unable to recover it. 00:37:33.034 [2024-07-14 15:10:11.965622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.034 [2024-07-14 15:10:11.965679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.965814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.965851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.966001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.966035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.966201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.966235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.966391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.966428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.966603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.966641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.966794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.966829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 
00:37:33.035 [2024-07-14 15:10:11.966966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.967114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.967367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.967521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.967726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.967909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.967951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.968066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.968100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.968240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.968277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.968411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.968631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.968669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 
00:37:33.035 [2024-07-14 15:10:11.968792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.968829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.968981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.969015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.969130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.969181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.969359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.969397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.969592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.969629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.969744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.035 [2024-07-14 15:10:11.969781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.035 qpair failed and we were unable to recover it. 00:37:33.035 [2024-07-14 15:10:11.969952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.969987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.970107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.970141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.970321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.970358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.970472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.970509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-07-14 15:10:11.970716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.970754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.970931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.970965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.971104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.971147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.971307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.971343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.971499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.971543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.971682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.971731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.971902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.971947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.972061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.972099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.972283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.972320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.972493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.972531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-07-14 15:10:11.972660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.972697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.972843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.972889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.973056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.973089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.973278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.973317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.973499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.973537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.973724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.973762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.973892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.973959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.974071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.974115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.974329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.974363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.974466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.974528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-07-14 15:10:11.974675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.974719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.974894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.974931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.975071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.975104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.975230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.975268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.975469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.975506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.975676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.975713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.975842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.975889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.976060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.976109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.976266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.976303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.976458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.976510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-07-14 15:10:11.976670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.976708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.976927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.976962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.977131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.977166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.977310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.977344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.977486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.977521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.977682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.977716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.977888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.977930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.978085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.978149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.978318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.978502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.978540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 
00:37:33.036 [2024-07-14 15:10:11.978715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.978753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.978894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.978946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.979136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.979191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.979344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.979399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.036 qpair failed and we were unable to recover it. 00:37:33.036 [2024-07-14 15:10:11.979557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.036 [2024-07-14 15:10:11.979598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.979753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.979791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.979951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.979985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.980124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.980174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.980278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.980329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.980494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.980550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-07-14 15:10:11.980752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.980786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.980945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.980993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.981118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.981160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.981287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.981323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.981488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.981540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.981678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.981712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.981859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.981915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.982071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.982267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.982427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-07-14 15:10:11.982602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.982780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.982948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.982982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.983123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.983270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.983415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.983638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.983820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.983988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.984023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.984159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.984213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 
00:37:33.037 [2024-07-14 15:10:11.984347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.984401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.984590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.984643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.984792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.984827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.984988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.985038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.985252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.985307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.985550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.985612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.037 [2024-07-14 15:10:11.985767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.037 [2024-07-14 15:10:11.985807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.037 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.985987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.986024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.986202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.986275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.986536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.986595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-07-14 15:10:11.986768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.986803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.986950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.986986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.987148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.987214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.987355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.987399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.987529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.987567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.987757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.987792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.987940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.987985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.988110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.988168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.988423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.988460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.988587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.988624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-07-14 15:10:11.988801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.988838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.989011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.989045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.989202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.989240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.989424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.989462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.989682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.989759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.989928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.989963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.990111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.990145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.990341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.990398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.990528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.990565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.990716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.990754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 
00:37:33.038 [2024-07-14 15:10:11.990891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.990925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.991068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.991102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.991254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.038 [2024-07-14 15:10:11.991291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.038 qpair failed and we were unable to recover it. 00:37:33.038 [2024-07-14 15:10:11.991493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.991530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.991680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.991717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.991888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.991942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.992047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.992187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.992371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.992561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 
00:37:33.039 [2024-07-14 15:10:11.992747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.992898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.992951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.993098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.993147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.993343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.993397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.993540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.993599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.993764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.993799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.993963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.993999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.994127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.994177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.039 qpair failed and we were unable to recover it. 00:37:33.039 [2024-07-14 15:10:11.994328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.039 [2024-07-14 15:10:11.994365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.994506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.994542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-07-14 15:10:11.994681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.994716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.994869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.994926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.995084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.995120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.995341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.995402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.995556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.995617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.995774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.995812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.996006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.996040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.996201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.996264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.996439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.996477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.996640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.996691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-07-14 15:10:11.996842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.996889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.997086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.997135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.997319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.997355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.997537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.997576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.997760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.997798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.997966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.998000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.998201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.998255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.998472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.998529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.998713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.998748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.998885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.998923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-07-14 15:10:11.999077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.999111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.999289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.999323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.999510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.999566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.999686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.999723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:11.999849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:11.999891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.000008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.000183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.000348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.000513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.000736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 
00:37:33.040 [2024-07-14 15:10:12.000923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.000958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.001115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.001184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.001322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.040 [2024-07-14 15:10:12.001363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.040 qpair failed and we were unable to recover it. 00:37:33.040 [2024-07-14 15:10:12.001591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.001630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.001786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.001824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.001992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.002027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.002150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.002185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.002361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.002398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.002558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.002616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.002797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.002835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-07-14 15:10:12.002982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.003129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.003331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.003505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.003731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.003954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.003988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.004099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.004132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.004300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.004342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.004560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.004597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.004717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.004756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-07-14 15:10:12.004914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.004965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.005117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.005154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.005308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.005348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.005544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.005584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.005783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.005834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.006002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.006158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.006356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.006540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.006728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 
00:37:33.041 [2024-07-14 15:10:12.006918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.006953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.007098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.007132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.041 [2024-07-14 15:10:12.007296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.041 [2024-07-14 15:10:12.007352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.041 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.007526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.007564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.007710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.007747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.007894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.007947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.008065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.008114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.008286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.008344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.008533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.008586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.008720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.008754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-07-14 15:10:12.008909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.008945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.009108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.009164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.009365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.009426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.009636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.009692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.009817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.009863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.010045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.010083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.010194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.010232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.010379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.010416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.010636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.010711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.010842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.010904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-07-14 15:10:12.011085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.011137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.011353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.011407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.011561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.011618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.011725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.011760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.011899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.011955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.012085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.012122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.012269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.012306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.012507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.012544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.012660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.012697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.012815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.012852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 
00:37:33.042 [2024-07-14 15:10:12.013010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.013066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.013224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.013278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.013431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.013472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.013624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.013663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.013863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.013906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.014066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.042 [2024-07-14 15:10:12.014101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.042 qpair failed and we were unable to recover it. 00:37:33.042 [2024-07-14 15:10:12.014275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.014313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.014490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.014547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.014736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.014787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.014952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.014987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-07-14 15:10:12.015106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.015140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.015322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.015360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.015503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.015735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.015773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.015937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.016005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.016150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.016185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.016351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.016385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.016628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.016685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.016840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.016886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.017047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.017081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-07-14 15:10:12.017260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.017316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.017513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.017567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.017741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.017790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.017973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.018009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.018129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.018186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.018412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.018446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.018582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.018637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.018787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.018825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.019054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.019089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.019246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.019285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 
00:37:33.043 [2024-07-14 15:10:12.019405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.019442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.019616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.019654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.019801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.019838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.020001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.020035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.020206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.020240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.020385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.020456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.020674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.020733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.043 [2024-07-14 15:10:12.020893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.043 [2024-07-14 15:10:12.020928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.043 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.021039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.021073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.021250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.021288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-07-14 15:10:12.021430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.021503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.021672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.021709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.021837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.021874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.022045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.022080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.022257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.022294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.022499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.022537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.022756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.022792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.022936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.022987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.023120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.023170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.023349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.023387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-07-14 15:10:12.023571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.023631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.023791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.023830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.023990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.024164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.024328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.024533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.024706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.024920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.024955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.025053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.025085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.025193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.025227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 
00:37:33.044 [2024-07-14 15:10:12.025358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.025396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.044 [2024-07-14 15:10:12.025550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.044 [2024-07-14 15:10:12.025584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.044 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.025700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.025735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.025874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.025916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.026054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.026092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.026246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.026284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.026431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.026470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.026633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.026667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.026841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.026886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.027015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-07-14 15:10:12.027168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.027307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.027544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.027708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.027886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.027937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.028060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.028094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.028237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.028271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.028418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.028456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.028607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.028645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.028828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.028862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 
00:37:33.047 [2024-07-14 15:10:12.029023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.029072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.029254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.029308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.029513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.029551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.029708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.029747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.047 qpair failed and we were unable to recover it. 00:37:33.047 [2024-07-14 15:10:12.029904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.047 [2024-07-14 15:10:12.029959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.030100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.030135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.030302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.030341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.030574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.030631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.030841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.030889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.031067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.031117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-07-14 15:10:12.031231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.031284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.031481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.031514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.031739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.031803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.031939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.031990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.032099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.032133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.032266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.032317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.032468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.032506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.032665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.032701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.032810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.032861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.033043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.033092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-07-14 15:10:12.033240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.033278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.033469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.033526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.033741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.033775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.033938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.033972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.034105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.034143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.034283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.034322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.034482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.034517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.034708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.034746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.034899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.034940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.035083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.035117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-07-14 15:10:12.035250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.035294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.035446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.035483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.035663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.035697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.035823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.035857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.036931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.036970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-07-14 15:10:12.037111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.037147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.037391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.037446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.037584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.037638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.037776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.037812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.037917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.037951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.038134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.038202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.038475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.038536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.038770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.038810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.038968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.039015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.039197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.039233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 
00:37:33.048 [2024-07-14 15:10:12.039476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.039532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.039722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.039757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.039894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.039938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.040086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.048 [2024-07-14 15:10:12.040121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.048 qpair failed and we were unable to recover it. 00:37:33.048 [2024-07-14 15:10:12.040322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.040361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.040551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.040590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.040733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.040771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.040953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.041002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.041173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.041234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.041413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.041455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.041656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.041709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.041864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.041924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.042060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.042095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.042254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.042294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.042457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.042501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.042696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.042765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.042912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.042950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.043092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.043133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.043288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.043323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.043460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.043495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.043663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.043701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.043825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.043863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.044014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.044049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.044212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.044250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.044403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.044441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.044645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.044682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.044839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.044884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.045014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.045059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.045230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.045279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.045442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.045499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.045634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.045687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.045847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.045888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.046010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.046046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.046205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.046257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.046410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.046463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.046773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.046845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.047025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.047061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.047221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.047259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.047447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.047515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.047752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.047811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.047961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.048002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.048177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.048217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.048427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.048485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.048738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.048777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.048973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.049129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.049278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.049534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.049746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.049929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.049965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.050104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.050139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.050256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.050290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.050445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.050483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.050602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.050639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.050810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.050871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.051052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.051089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.051295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.051361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.051586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.051647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.051785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.051825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.051987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.052034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 
00:37:33.049 [2024-07-14 15:10:12.052287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.052346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.052581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.049 [2024-07-14 15:10:12.052635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.049 qpair failed and we were unable to recover it. 00:37:33.049 [2024-07-14 15:10:12.052808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.052842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.052994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.053041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.053173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.053218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.053409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.053468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.053694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.053733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.053859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.053914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.054071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.054115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.054292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.054327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-07-14 15:10:12.054542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.054599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.054757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.054795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.054959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.055175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.055329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.055483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.055670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.055891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.055946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.056113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.056162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.056327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.056384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-07-14 15:10:12.056555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.056607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.056761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.056796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.056935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.056971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.057103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.057137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.057281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.057333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.057452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.057490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.057639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.057677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.057866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.057940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.058069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.058107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.058223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.058263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-07-14 15:10:12.058442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.058495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.058655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.058706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.058882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.058935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.059098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.059137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.059315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.059358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.059505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.059580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.059729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.059768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.059978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.060027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.060193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.060262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 00:37:33.050 [2024-07-14 15:10:12.060431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.060473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.050 qpair failed and we were unable to recover it. 
00:37:33.050 [2024-07-14 15:10:12.060624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.050 [2024-07-14 15:10:12.060664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.060842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.060889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.061063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.061113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.061364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.061424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.061681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.061740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.061904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.061955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.062092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.062126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.062280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.062318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.062462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.062499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.062762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.062820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-07-14 15:10:12.063000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.063035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.063146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.063197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.063407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.063445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.063586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.063635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.063809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.063846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.063993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.064028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.064224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.064278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.064521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.064562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.064705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.064763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.064960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.064995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-07-14 15:10:12.065109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.065143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.065309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.065344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.065475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.065512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.065685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.065723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.065850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.065890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.066062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.066111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.066294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.066334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.066514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.066552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.066698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.066737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.066883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.066950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-07-14 15:10:12.067117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.067166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.067360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.067414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.067732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.067851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.067904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.068062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.068102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.068391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.068451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.068722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.068778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.068943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.068979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.069116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.069150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.069257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.069309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 
00:37:33.051 [2024-07-14 15:10:12.069423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.069460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.069590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.069658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.069829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.069867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.070035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.070069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.070206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.070240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.070347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.070397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.070586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.070625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.070799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.051 [2024-07-14 15:10:12.070837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.051 qpair failed and we were unable to recover it. 00:37:33.051 [2024-07-14 15:10:12.071002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.071037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.071212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.071249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-07-14 15:10:12.071421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.071459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.071630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.071668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.071842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.071887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.072037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.072087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.072258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.072308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.072538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.072597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.072778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.072818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.072964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.073000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.073143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.073202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.073427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.073500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-07-14 15:10:12.073650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.073709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.073869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.073929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.074040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.074072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.074259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.074309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.074533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.074590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.074701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.074736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.074909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.074944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.075108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.075161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.075313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.075367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.075578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.075638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-07-14 15:10:12.075808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.075842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.075988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.076022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.076169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.076246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.076497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.076560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.076693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.076735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.076903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.076939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.077126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.077180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.077368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.077409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.077679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.077745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.077917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.077967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-07-14 15:10:12.078130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.078168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.078321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.078359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.078512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.078551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.078704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.078752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.078903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.078957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.079098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.079133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.079303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.079356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.079503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.079597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.079736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.079771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.079888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.079921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 
00:37:33.052 [2024-07-14 15:10:12.080037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.080072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.080206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.080240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.080377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.080412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.080575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.080624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.080770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.080807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.080970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.081020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.081159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.081201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.081358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.081435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.052 qpair failed and we were unable to recover it. 00:37:33.052 [2024-07-14 15:10:12.081672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.052 [2024-07-14 15:10:12.081735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.081895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.081930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.082064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.082099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.082305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.082375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.082569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.082617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.082790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.082842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.083002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.083035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.083172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.083214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.083398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.083443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.083575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.083610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.083737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.083788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.084022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.084057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.084239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.084277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.084447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.084485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.084601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.084639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.084768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.084806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.084971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.085175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.085353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.085566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.085752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.085931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.085966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.086080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.086255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.086402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.086570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.086712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.086903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.086954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.087093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.087128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.087248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.087283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.087433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.087468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.087660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.087698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.087823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.087861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.088894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.088948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.089071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.089120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.089289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.089345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.089498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.089552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.089689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.089724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.089863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.089925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.090076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.090113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.090307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.090382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.090543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.090582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.090702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.090740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.090899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.090950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.091055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.091088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.091223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.091257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.091424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.091461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 
00:37:33.053 [2024-07-14 15:10:12.091655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.091692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.091862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.091926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.092065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.092099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.095989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.096040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.096195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.096242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.096372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.053 [2024-07-14 15:10:12.096412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.053 qpair failed and we were unable to recover it. 00:37:33.053 [2024-07-14 15:10:12.096553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.096591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.096741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.096779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.096934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.096968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.097104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.097154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-07-14 15:10:12.097309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.097346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.097500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.097553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.097714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.097752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.097929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.097965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.098133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.098168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.098307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.098343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.098459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.098493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.098661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.098697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.098804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.098856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.099000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-07-14 15:10:12.099188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.099395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.099558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.099715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.099905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.099956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.100071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.100103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.100228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.100264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.100456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.100493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.100607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.100642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.100815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.100853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-07-14 15:10:12.101016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.101050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.101249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.101298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.101461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.101503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.101694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.101737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.101935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.101970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.102088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.102121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.102267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.102302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.102413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.102445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.102610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.102648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.102801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.102835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 
00:37:33.054 [2024-07-14 15:10:12.102976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.103010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.103136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.103190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.103347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.103381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.103495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.103527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.103718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.054 [2024-07-14 15:10:12.103760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.054 qpair failed and we were unable to recover it. 00:37:33.054 [2024-07-14 15:10:12.103891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.103924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.104067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.104101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.104211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.104243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.104431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.104469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.104616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.104653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-07-14 15:10:12.104805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.104841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.105024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.105074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.105224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.105262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.105415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.105468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.105635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.105693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.105827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.105862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-07-14 15:10:12.106658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.106859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.106982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.107149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.107360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.107535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.107696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.107921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.107957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.108109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.108165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.108350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.108388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-07-14 15:10:12.108565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.108625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.108748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.108784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.108911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.108946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.109045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.109077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.109226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.109263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.109431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.109496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.109644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.109682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.109828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.109865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.110074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.110123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.110313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.110378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-07-14 15:10:12.110515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.110573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.110737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.110771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.110934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.110989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.111152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.111210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.111349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.111399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.111539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.111573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.111708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.111742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.111907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.111957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.112093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.112125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.112302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.112336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.055 [2024-07-14 15:10:12.112539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.112576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.112751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.112788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.112925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.112957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.113104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.113141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.113411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.113449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.113560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.113595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.113746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.113779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.113918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.113953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.114068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.114101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 00:37:33.055 [2024-07-14 15:10:12.114252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.055 [2024-07-14 15:10:12.114289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.055 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.114431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.114469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.114587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.114624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.114769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.114818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.114948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.114989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.115141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.115196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.115325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.115374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.115519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.115576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.115714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.115756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.115911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.115947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.116064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.116198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.116386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.116560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.116735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.116936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.116971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.117111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.117144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.117280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.117313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.117473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.117510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.117712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.117749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.117867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.117928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.118044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.118078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.118187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.118221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.118354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.118404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.118555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.118591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.118801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.118838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.118985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.119150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.119307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.119496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.119663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.119899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.119948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.120091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.120128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.120241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.120276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.120468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.120521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.120654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.120710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.120893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.120947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.121054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.121203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.121358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.121539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.121710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.121849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.121898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.122946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.122986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.123166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.123217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 
00:37:33.056 [2024-07-14 15:10:12.123378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.056 [2024-07-14 15:10:12.123432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.056 qpair failed and we were unable to recover it. 00:37:33.056 [2024-07-14 15:10:12.123536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.123584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.123693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.123728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.123895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.123931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.124915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.124948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-07-14 15:10:12.125104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.125159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.125286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.125339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.125444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.125477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.125642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.125677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.125815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.125850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.125989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.126166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.126319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.126462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.126635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-07-14 15:10:12.126776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.126914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.126947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.127921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.127957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 
00:37:33.057 [2024-07-14 15:10:12.128361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.128846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.128984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.129020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.129126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.129159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.129271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.129307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.057 [2024-07-14 15:10:12.129448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.057 [2024-07-14 15:10:12.129484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.057 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.129596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.129631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.129796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.129831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-07-14 15:10:12.129950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.129994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.130863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.130911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.131035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.131251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.131432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 
00:37:33.058 [2024-07-14 15:10:12.131616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.131748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.131911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.131946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.058 qpair failed and we were unable to recover it. 00:37:33.058 [2024-07-14 15:10:12.132086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.058 [2024-07-14 15:10:12.132121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.132227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.132261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.132404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.132439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.132574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.132629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.132765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.132823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.133000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.133161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-07-14 15:10:12.133341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.133561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.133806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.133965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.133998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.134127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.134180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.134360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.134412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.134592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.134644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.134781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.134816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.134992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.135036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.135203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.135258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-07-14 15:10:12.135475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.135536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.135706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.135777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.135954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.135993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.136138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.136191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.136353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.136436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.136568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.136606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.136742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.136780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.136937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.136987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.137168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.137223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.137388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.137429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 
00:37:33.059 [2024-07-14 15:10:12.137564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.137602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.137712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.137755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.137893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.137946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.138081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.138114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.138264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.059 [2024-07-14 15:10:12.138301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.059 qpair failed and we were unable to recover it. 00:37:33.059 [2024-07-14 15:10:12.138470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.138509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.138625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.138662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.138812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.138861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.139009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.139058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.139252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.139292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-07-14 15:10:12.139427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.139465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.139638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.139675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.139833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.139871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.140014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.140049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.140223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.140277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.140465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.140516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.140696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.140764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.140889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.140952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.141097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.141132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.141309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.141345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-07-14 15:10:12.141463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.141519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.141583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:33.060 [2024-07-14 15:10:12.141765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.141804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.141983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.142147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.142352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.142538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.142710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.142869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.142934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.143050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.143083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 
00:37:33.060 [2024-07-14 15:10:12.143186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.143220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.143381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.143419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.143592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.143630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.143779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.143816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.143965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.144001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.144135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.144184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.144335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.060 [2024-07-14 15:10:12.144390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.060 qpair failed and we were unable to recover it. 00:37:33.060 [2024-07-14 15:10:12.144543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.144597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.144731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.144765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.144914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.144949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 
00:37:33.061 [2024-07-14 15:10:12.145091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.145243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.145424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.145621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.145764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.145936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.145971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.146116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.146152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.146286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.146338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.146501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.146556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.146669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.146704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 
00:37:33.061 [2024-07-14 15:10:12.146850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.146912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.147123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.147178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.147382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.147423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.147609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.147675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.147846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.147892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.148042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.148092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.148277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.148328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.148477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.148535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.148644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.148680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.148852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.148902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 
00:37:33.061 [2024-07-14 15:10:12.149064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.149098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.149240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.149278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.149448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.149498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.149649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.149686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.149831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.149868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.150014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.150190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.150380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.150586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.150753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 
00:37:33.061 [2024-07-14 15:10:12.150920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.150955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.151056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.151095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.151221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.061 [2024-07-14 15:10:12.151270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.061 qpair failed and we were unable to recover it. 00:37:33.061 [2024-07-14 15:10:12.151460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.151514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.151650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.151705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.151866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.151929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.152066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.152210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.152378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.152562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 
00:37:33.062 [2024-07-14 15:10:12.152713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.152917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.152953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.153891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.153927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.154100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.154135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.154305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.154358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 
00:37:33.062 [2024-07-14 15:10:12.154542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.154609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.154748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.154783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.154922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.154976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.155133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.155185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.155429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.155485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.155631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.155665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.155831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.156019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.156056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.156209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.156259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.156453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.156518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 
00:37:33.062 [2024-07-14 15:10:12.156676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.156747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.156919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.156954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.157072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.157107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.157262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.157314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.157534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.157588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.157744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.157779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.157976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.158030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.158162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.158219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.158369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.158426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.158592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.158651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 
00:37:33.062 [2024-07-14 15:10:12.158793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.158827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.158987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.159027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.159173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.159210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.159345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.159414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.159606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.159664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.159826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.159859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.159992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.160053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.160246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.160316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.062 qpair failed and we were unable to recover it. 00:37:33.062 [2024-07-14 15:10:12.160574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.062 [2024-07-14 15:10:12.160615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.160806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.160845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 
00:37:33.063 [2024-07-14 15:10:12.161043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.161078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.161258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.161309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.161598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.161675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.161821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.161855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.162028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.162064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.162268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.162323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.162562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.162622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.162789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.162832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.163035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.163070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.163250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.163305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 
00:37:33.063 [2024-07-14 15:10:12.163464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.163506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.163693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.163732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.163870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.163911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.164052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.164087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.164255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.164293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.164501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.164545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.164699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.164738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.164887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.164940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.165070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.165104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.165237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.165289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 
00:37:33.063 [2024-07-14 15:10:12.165483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.165522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.165690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.165728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.165867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.165933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.166048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.166082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.166235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.166270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.166438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.166477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.166651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.166688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.166870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.166909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.167053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.167087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.167283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.167338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 
00:37:33.063 [2024-07-14 15:10:12.167501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.167541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.167668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.167707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.167890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.167945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.168084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.168117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.168247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.168299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.168455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.168494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.063 [2024-07-14 15:10:12.168676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.063 [2024-07-14 15:10:12.168713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.063 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.168914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.168959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.169066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.169100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.169238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.169270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.169419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.169457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.169604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.169641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.169783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.169833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.170042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.170238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.170452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.170640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.170808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.170982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.171171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.171369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.171534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.171764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.171961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.171996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.172127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.172176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.172302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.172344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.172527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.172581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.172716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.172769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.172933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.172968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.173100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.173135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.173272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.173308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.173441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.173475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.173606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.173640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.173773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.173807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.173966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.174019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.174204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.174257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.174475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.174527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.174687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.174722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.174887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.174923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.175087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.175133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.175270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.175308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.175424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.175461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.175640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.175697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.175838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.175883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.176083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.176132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.176320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.176377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.176530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.176570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.176732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.176772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.176956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.176990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.177126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.177181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.177327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.177366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.177522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.177560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.177715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.177752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.177938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.177988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.178135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.178184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.178341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.178393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.178529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.178586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.178720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.178755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.178894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.178929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.179076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.179240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.179395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.179597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.179765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.179943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.179977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.180090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.180128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.180356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.180394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.180533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.180571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.180695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.180732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.180895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.180929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 
00:37:33.064 [2024-07-14 15:10:12.181060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.181094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.181249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.181286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.181494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.181532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.181673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.181710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.181869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.181912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.182073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.182107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.064 [2024-07-14 15:10:12.182277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.064 [2024-07-14 15:10:12.182344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.064 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.182495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.182536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.182764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.182802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.182981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.183017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.183163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.183218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.183407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.183448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.183603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.183644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.183791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.183830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.183983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.184172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.184322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.184494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.184726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.184908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.184943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.185046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.185079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.185238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.185275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.185390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.185427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.185555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.185595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.185792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.185841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.185992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.186040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.186210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.186249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.186463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.186528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.186740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.186778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.186914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.186978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.187094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.187126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.187264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.187298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.187506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.187563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.187752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.187790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.187966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.188116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.188353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.188542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.188738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.188929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.188979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.189113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.189162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.189330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.189386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.189572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.189626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.189794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.189829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.190015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.190065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.190220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.190258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.190413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.190447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.190713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.190773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.190939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.190974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.191119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.191171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.191300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.191352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.191496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.191532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.191649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.191686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.191843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.191898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.192030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.192066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.192224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.192278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.192423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.192476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.192634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.192686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.192849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.192894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.193000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.193035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.193252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.193307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.193527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.193586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.193751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.193790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.193945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.193979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.194118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.194171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.194349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.194415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.194633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.194690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.194834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.194872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.195067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.195116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.195328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.195368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 
00:37:33.065 [2024-07-14 15:10:12.195592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.195651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.195799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.195846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.196005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.196040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.196223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.196279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.196474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.196534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.065 [2024-07-14 15:10:12.196729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.065 [2024-07-14 15:10:12.196801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.065 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.196993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.197029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.197193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.197231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.197368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.197422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.197597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.197664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.197826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.197862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.198014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.198049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.198178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.198228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.198391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.198460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.198608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.198664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.198828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.198888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.199027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.199062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.199194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.199248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.199458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.199515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.199720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.199778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.199969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.200015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.200129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.200161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.200428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.200486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.200749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.200807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.200988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.201035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.201239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.201295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.201587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.201647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.201811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.201860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.202063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.202098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.202249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.202291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.202464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.202539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.202695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.202732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.202915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.202949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.203074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.203123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.203303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.203343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.203475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.203532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.203705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.203744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.203909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.203964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.204090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.204139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.204301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.204340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.204465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.204503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.204676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.204713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.204873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.204918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.205076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.205111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.205315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.205375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.205611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.205672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.205849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.205898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.206054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.206088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.206212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.206262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.206423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.206478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.206627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.206679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.206820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.206855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.206997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.207046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.207163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.207220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.207424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.207485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.207723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.207776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.207946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.207981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.208125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.208160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.208302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.208341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.208495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.208533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.208681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.208718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.208889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.208939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.209128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.209176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.209359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.209399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.209520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.209558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.209730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.209768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.209894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.209948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.210127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.210197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.210412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.210514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.210733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.210791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.210969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.211005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.211136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.211172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.211336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.211375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.211553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.211607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.211840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.211885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.212016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.212051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.212192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.212226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.212442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.212480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.212649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.212699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.212854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.212904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 
00:37:33.066 [2024-07-14 15:10:12.213068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.213103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.213245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.066 [2024-07-14 15:10:12.213296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.066 qpair failed and we were unable to recover it. 00:37:33.066 [2024-07-14 15:10:12.213521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.213573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.213718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.213757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.213915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.213969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.214081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.214122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.214257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.214295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.214442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.214481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.214642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.214680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.214862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.214902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.215051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.215100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.215299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.215340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.215478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.215533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.215682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.215720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.215844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.215889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.216066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.216116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.216263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.216300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.216567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.216626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.216769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.216815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.217014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.217049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.217222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.217271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.217441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.217478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.217590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.217623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.217795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.217829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.218003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.218038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.218209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.218261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.218459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.218508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.218715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.218771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.218927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.218977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.219112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.219145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.219323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.219361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.219534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.219604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.219746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.219799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.219979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.220166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.220338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.220522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.220723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.220936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.220970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.221109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.221143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.221330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.221368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.221577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.221615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.221742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.221779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.221915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.221983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.222111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.222160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.222327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.222387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.222605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.222661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.222770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.222805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.222923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.222972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.223118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.223181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.223370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.223409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.223559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.223616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.223775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.223817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.223990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.224031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.224170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.224214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.224368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.224405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.224576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.224614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.224781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.224817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.225003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.225051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.225241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.225297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.225449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.225512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.225726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.225795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.225975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.226018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.226165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.226214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.226398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.226471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.226679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.226739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.226856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.226904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.227038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.227071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.227176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.227209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.227371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.227407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.227659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.227696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.227842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.227890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.228054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.228091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.228240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.228275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.228386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.228439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.228709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.228767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.228924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.228959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.229070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.229105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.229269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.229319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 00:37:33.067 [2024-07-14 15:10:12.229521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.229586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.067 qpair failed and we were unable to recover it. 
00:37:33.067 [2024-07-14 15:10:12.229732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.067 [2024-07-14 15:10:12.229769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.229969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.230117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.230290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.230452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.230689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.230846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.230890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.231073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.231106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.231252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.231305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.231503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.231543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.231688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.231726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.231855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.231896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.232068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.232101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.232249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.232286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.232487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.232524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.232650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.232687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.232857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.232927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.233038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.233072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.233246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.233283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.233551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.233614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.233801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.233838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.233999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.234157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.234369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.234539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.234764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.234938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.234972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.235133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.235185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.235326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.235363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.235518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.235552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.235662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.235696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.235850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.235907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.236084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.236133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.236273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.236314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.236470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.236523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.236683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.236739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.236875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.236917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.237056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.237091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.237258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.237292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.237487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.237525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.237644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.237695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.237837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.237874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.238038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.238071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.238226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.238276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.238434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.238488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.238674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.238735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.238884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.238921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.239097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.239152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.239345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.239387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.239500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.239538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.239800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.239860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.240041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.240076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.240238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.240275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.240408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.240461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.240615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.240665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.240818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.240858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.241047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.241097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.241288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.241344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.241605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.241663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.241805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.241840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.242019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.242054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.242223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.242261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.242426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.242484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.242661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.242699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.242823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.242857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.242970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.243156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.243337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.243497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.243655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 
00:37:33.068 [2024-07-14 15:10:12.243872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.243935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.244076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.068 [2024-07-14 15:10:12.244109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.068 qpair failed and we were unable to recover it. 00:37:33.068 [2024-07-14 15:10:12.244285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.244340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.244474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.244525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.244713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.244764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.244906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.244940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.245069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.245108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.245262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.245294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.245480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.245534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.245651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.245685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.245818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.245865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.246041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.246220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.246435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.246624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.246818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.246985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.247199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.247366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.247528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.247716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.247905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.247939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.248073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.248105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.248307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.248360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.248592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.248628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.248788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.248827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.249003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.249036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.249181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.249219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.249393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.249429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.249618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.249654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.249786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.249820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.249982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.250029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.250191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.250230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.250406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.250445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.250633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.250669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.250821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.250858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.251046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.251207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.251402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.251600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.251781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.251932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.251967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.252129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.252292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.252480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.252640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.252824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.252995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.253028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.253217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.253269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.253423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.253474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.253640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.253693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.253834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.253867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.254014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.254046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.254244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.254295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.254515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.254553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.254767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.254830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.254977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.255154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.255322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.255458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.255621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.255812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.255860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.256049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.256085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.256271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.256326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.256568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.256628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.256779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.256817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.256966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.257152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.257396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.257586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.257783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.257956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.257990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.258950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.258988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.259144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.259182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.069 [2024-07-14 15:10:12.259381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.259434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 
00:37:33.069 [2024-07-14 15:10:12.259622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.069 [2024-07-14 15:10:12.259674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.069 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.259837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.259870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.260945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.260989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.261144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.261181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.261328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.261365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.261514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.261551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.261771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.261836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.261997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.262034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.262215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.262268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.262429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.262466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.262584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.262628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.262802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.262839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.263007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.263041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.263199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.263236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.263407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.263445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.263596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.263633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.263788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.263825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.264047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.264206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.264423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.264598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.264816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.264971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.265139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.265334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.265495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.265735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.265938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.265973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.266160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.266197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.266405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.266441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.266588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.266624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.266777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.266814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.266994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.267042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.267200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.267247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.267417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.267456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.267616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.267652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.267817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.267850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.267984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.268135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.268304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.268486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.268665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.268854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.268893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.269032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.269215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.269384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.269554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.269728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.269921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.269955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.270071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.270104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.270212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.270244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.270398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.270433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.270610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.270646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.270820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.270855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.271007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.271054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.271246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.271301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.271425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.271478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.271662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.271713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.271929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.271964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.272896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.272941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 
00:37:33.070 [2024-07-14 15:10:12.273071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.070 [2024-07-14 15:10:12.273104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.070 qpair failed and we were unable to recover it. 00:37:33.070 [2024-07-14 15:10:12.273270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.273305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.273476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.273508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.273642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.273675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.273806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.273838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.273995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.274186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.274393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.274596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.274751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.274888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.274922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.275081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.275137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.275319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.275376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.275536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.275588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.275762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.275795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.275957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.275991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.276130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.276162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.276278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.276311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.276484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.276520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.276656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.276692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.276823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.276860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.277055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.277237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.277465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.277669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.277850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.277991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.278167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.278304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.278507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.278751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.278959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.278991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.279104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.279136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.279332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.279368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.279507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.279543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.279666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.279703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.279892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.279925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.280027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.280165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.280377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.280525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.280744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.280960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.280993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.281125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.281174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.281299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.281335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.281546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.281582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.281721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.281758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.281922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.281955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.282070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.282102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.282239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.282291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.282411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.282446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.282616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.282652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.282803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.282844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.282988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.283134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.283268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.283449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.283695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.283841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.283886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.284026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.284064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.284204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.284237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.284415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.284451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.284646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.284679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.284823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.284856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.284995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.285043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.285194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.285233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.285453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.285491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.285663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.285700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.285885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.285939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.286071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.286118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 
00:37:33.071 [2024-07-14 15:10:12.286289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.286324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.286478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.286515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.286674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.286724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.286909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.286941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.287045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.287078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.287238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.287271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.287462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.287498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.071 [2024-07-14 15:10:12.287622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.071 [2024-07-14 15:10:12.287658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.071 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.287813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.287849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.288014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.288213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.288371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.288573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.288758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.288949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.288982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.289112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.289159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.289351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.289404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.289561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.289612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.289722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.289755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.289898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.289932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.290096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.290147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.290328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.290376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.290533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.290591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.290732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.290766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.290926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.290965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.291106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.291142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.291292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.291328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.291449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.291485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.291630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.291666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.291793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.291829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.291996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.292192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.292385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.292592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.292766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.292919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.292969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.293110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.293142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.293302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.293338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.293454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.293490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.293665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.293701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.293887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.293938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.294073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.294245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.294401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.294555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.294790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.294981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.295167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.295329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.295496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.295680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.295863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.295902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.296889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.296937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.297092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.297127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.297270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.297303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.297432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.297484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.297646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.297698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.297843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.297894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.298863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.298907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 
00:37:33.072 [2024-07-14 15:10:12.299153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.299963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.299996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.300143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.300195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.072 [2024-07-14 15:10:12.300316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.072 [2024-07-14 15:10:12.300352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.072 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.300553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.300588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.300740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.300776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 
00:37:33.073 [2024-07-14 15:10:12.300918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.300953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.301061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.301093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.301228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.301278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.301431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.301467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.301628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.301664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.301842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.301888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.302012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.302175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.302345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.302550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 
00:37:33.073 [2024-07-14 15:10:12.302752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.302944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.302977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.303105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.303153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.303306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.303339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.303522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.303558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.303706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.303741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.303893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.303926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.304047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.304079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.304243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.304275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 00:37:33.073 [2024-07-14 15:10:12.304376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.073 [2024-07-14 15:10:12.304427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.073 qpair failed and we were unable to recover it. 
00:37:33.073 [2024-07-14 15:10:12.304583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.073 [2024-07-14 15:10:12.304619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.073 qpair failed and we were unable to recover it.
00:37:33.369 [2024-07-14 15:10:12.307203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.370 [2024-07-14 15:10:12.307251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.370 qpair failed and we were unable to recover it.
[The same three-line error group (posix_sock_create: connect() failed, errno = 111 (connection refused); nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 15:10:12.304583 through 15:10:12.341732, alternating between tqpair=0x6150001f2a00 and tqpair=0x615000210000, all against addr=10.0.0.2, port=4420.]
00:37:33.374 [2024-07-14 15:10:12.341872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.341912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.342064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.342115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.342304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.342353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.342512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.342563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.342695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.342728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.342863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.342905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.343055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.343252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.343385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.343581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 
00:37:33.374 [2024-07-14 15:10:12.343712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.343843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.343884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.374 qpair failed and we were unable to recover it. 00:37:33.374 [2024-07-14 15:10:12.344020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.374 [2024-07-14 15:10:12.344052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.344168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.344200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.344320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.344356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.344530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.344566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.344682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.344718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.344884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.344917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.345034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.345066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.345240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.345293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 
00:37:33.375 [2024-07-14 15:10:12.345462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.345502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.345664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.345703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.345854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.345900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.346096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.346287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.346437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.346648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.346810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.346975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.347146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 
00:37:33.375 [2024-07-14 15:10:12.347362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.347521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.347714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.347890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.347924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.348080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.348132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.348305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.348345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.348498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.348536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.348671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.348710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.348843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.348898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.349021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 
00:37:33.375 [2024-07-14 15:10:12.349200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.349357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.349494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.349647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.349787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.349826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.350027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.350079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.350255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.350294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.350416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.350453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.350634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.350673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.350816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.350848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 
00:37:33.375 [2024-07-14 15:10:12.350972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.351165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.351361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.351597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.351786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.351965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.351999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.375 [2024-07-14 15:10:12.352171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.375 [2024-07-14 15:10:12.352224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.375 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.352356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.352395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.352586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.352623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.352792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.352831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-07-14 15:10:12.353003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.353191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.353370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.353538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.353753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.353924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.353959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.354114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.354164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.354311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.354359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.354520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.354557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.354678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.354716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-07-14 15:10:12.354867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.354944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.355069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.355116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.355288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.355349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.355538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.355590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.355723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.355756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.355889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.355924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.356070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.356229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.356396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.356564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-07-14 15:10:12.356739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.356905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.356939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.357920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.357954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.358106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.358158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.358314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.358366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-07-14 15:10:12.358527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.358580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.358727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.358760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.358875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.358940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.359890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.359925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 00:37:33.376 [2024-07-14 15:10:12.360118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.360170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.376 qpair failed and we were unable to recover it. 
00:37:33.376 [2024-07-14 15:10:12.360321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.376 [2024-07-14 15:10:12.360376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.360556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.360608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.360751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.360785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.360942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.360995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.361380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.361520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.361691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.361830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.361997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-07-14 15:10:12.362163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.362321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.362490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.362683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.362854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.362896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.363053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.363229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.363385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.363534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.363724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-07-14 15:10:12.363899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.363933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.364935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.364970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.365124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.365189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.365331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.365386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.365530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.365578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-07-14 15:10:12.365712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.365749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.365906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.365940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.366079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.366111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.366247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.366283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.366418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.366470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.366611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.366646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.366834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.366870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.367021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.367068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.367265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.367318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 00:37:33.377 [2024-07-14 15:10:12.367431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.377 [2024-07-14 15:10:12.367470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.377 qpair failed and we were unable to recover it. 
00:37:33.377 [2024-07-14 15:10:12.367622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.367677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.367787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.367820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.367991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.368042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.368194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.368247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.368407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.368457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.368588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.368621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.368796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.368831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.368976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.369122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.369259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-07-14 15:10:12.369397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.369539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.369709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.369894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.369928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.370054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.370091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.370303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.370356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.370509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.370561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.370714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.370748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.370916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.370955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.371084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-07-14 15:10:12.371271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.371458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.371610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.371773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.371938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.371971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.372135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.372186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.372310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.372346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.372513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.372549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.372694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.372729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.372845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.372895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-07-14 15:10:12.373092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.373152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.373298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.373353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.373505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.373558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.373691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.373724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.373839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.373873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 
00:37:33.378 [2024-07-14 15:10:12.374752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.374903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.374951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.375116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.375168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.375327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.378 [2024-07-14 15:10:12.375380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.378 qpair failed and we were unable to recover it. 00:37:33.378 [2024-07-14 15:10:12.375531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.375582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.375687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.375719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.375888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.375922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.376051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.376237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.376392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 [2024-07-14 15:10:12.376568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.376751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.376958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.376990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.377109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.377140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.377278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.377310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.377451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.377483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.377637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.377691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.377814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.377862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.378027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.378208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 [2024-07-14 15:10:12.378419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.378612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.378779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.378957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.378990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.379109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.379142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.379326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.379360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.379615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.379652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.379774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.379812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.379975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.380008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.380120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.380153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 [2024-07-14 15:10:12.380330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.380401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.380608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.380666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.380795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.380830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.380977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.381141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.381346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.381497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.381663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.381896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.381930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.382035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 
00:37:33.379 [2024-07-14 15:10:12.382213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.382375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.382597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.382760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.382932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.382965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.383086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.383120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.383332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.383365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.383532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.383568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.379 qpair failed and we were unable to recover it. 00:37:33.379 [2024-07-14 15:10:12.383735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.379 [2024-07-14 15:10:12.383768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.383905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.383939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 
00:37:33.380 [2024-07-14 15:10:12.384053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.384085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.384204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.384252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.384403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.384440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.384618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.384655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.384791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.384828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.385015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.385063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.385217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.385264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.385405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.385446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.385648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.385701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.385810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.385843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 
00:37:33.380 [2024-07-14 15:10:12.385992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.386184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.386341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.386526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.386685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.386862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.386907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.387057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.387090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.387217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.387254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.387431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.387483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.387610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.387662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 
00:37:33.380 [2024-07-14 15:10:12.387802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.387835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.387999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.388191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.388357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.388556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.388719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.388893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.388942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.389068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.389115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.389247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.389286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.389441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.389480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 
00:37:33.380 [2024-07-14 15:10:12.389632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.389684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.389802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.389835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.389975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.390928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.390964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.391086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.391119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 
00:37:33.380 [2024-07-14 15:10:12.391260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.391293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.391426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.380 [2024-07-14 15:10:12.391460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.380 qpair failed and we were unable to recover it. 00:37:33.380 [2024-07-14 15:10:12.391611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.391646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.391808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.391845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.391992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.392158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.392385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.392562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.392720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.392913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.392952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 
00:37:33.381 [2024-07-14 15:10:12.393100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.393269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.393447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.393586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.393733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.393911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.393944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.394095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.394129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.394259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.394296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.394417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.394454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.394613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.394650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 
00:37:33.381 [2024-07-14 15:10:12.394792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.394825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.394973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.395157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.395342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.395558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.395733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.395917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.395950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.396093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.396126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.396267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.396300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.396496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.396538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 
00:37:33.381 [2024-07-14 15:10:12.396661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.396697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.396846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.396888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.397051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.397085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.397260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.397298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.397453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.397489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.397662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.397698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.397827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.397865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.398025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.398072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.398211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.398249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 00:37:33.381 [2024-07-14 15:10:12.398404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.398442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.381 qpair failed and we were unable to recover it. 
00:37:33.381 [2024-07-14 15:10:12.398599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.381 [2024-07-14 15:10:12.398636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.398782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.398818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.398961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.398995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.399143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.399178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.399316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.399350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.399458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.399508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.399637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.399674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.399841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.399875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.400009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.400182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 
00:37:33.382 [2024-07-14 15:10:12.400336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.400506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.400712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.400950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.400983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.401118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.401168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.401313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.401349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.401501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.401538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.401686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.401724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.401894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.401962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.402094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.402141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 
00:37:33.382 [2024-07-14 15:10:12.402283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.402338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.402471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.402522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.402686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.402738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.402845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.402886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.403850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.403894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 
00:37:33.382 [2024-07-14 15:10:12.404031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.404258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.404407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.404588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.404760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.404950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.404984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.405113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.405166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.405272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.405305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.405470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.405503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.405657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.405695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 
00:37:33.382 [2024-07-14 15:10:12.405832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.405865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.406020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.406053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.406191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.406224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.406365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.406398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.382 qpair failed and we were unable to recover it. 00:37:33.382 [2024-07-14 15:10:12.406534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.382 [2024-07-14 15:10:12.406568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.406702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.406735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.406912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.406959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.407083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.407239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.407405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 
00:37:33.383 [2024-07-14 15:10:12.407554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.407692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.407860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.407901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.408857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.408897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.409037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 
00:37:33.383 [2024-07-14 15:10:12.409232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.409400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.409553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.409728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.409916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.409964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.410105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.410141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.410276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.410313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.410516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.410552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.410676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.410712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.410889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.410960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 
00:37:33.383 [2024-07-14 15:10:12.411117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.411164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.411332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.411388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.411518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.411570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.411721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.411755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.411901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.411947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.412113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.412147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.412292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.412329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.412472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.412505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.412647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.412679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.412787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.412820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 
00:37:33.383 [2024-07-14 15:10:12.412995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.413028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.413134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.413184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.413393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.413429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.413613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.413649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.413767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.413803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.413981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.414014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.414124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.414156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.414332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.414368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.383 [2024-07-14 15:10:12.414544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.383 [2024-07-14 15:10:12.414607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.383 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.414760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.414796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 
00:37:33.384 [2024-07-14 15:10:12.414950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.414984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.415129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.415162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.415297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.415330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.415514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.415550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.415692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.415746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.415902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.415952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.416089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.416153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.416336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.416375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.416499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.416535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.416688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.416725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 
00:37:33.384 [2024-07-14 15:10:12.416905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.416953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.417076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.417112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.417265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.417319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.417477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.417529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.417713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.417766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.417874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.417914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.418047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.418080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.418237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.418289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.418491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.418527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.418699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.418758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 
00:37:33.384 [2024-07-14 15:10:12.418938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.418974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.419144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.419210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.419406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.419458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.419577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.419612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.419749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.419783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.419967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.420124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.420281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.420514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.420694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 
00:37:33.384 [2024-07-14 15:10:12.420934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.420978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.421132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.421185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.421355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.421408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.421578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.421611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.421724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.421757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.421892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.421926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.422085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.422136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.422245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.422279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.422463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.422513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.422652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.422686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 
00:37:33.384 [2024-07-14 15:10:12.422824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.422858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.423002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.423050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.384 [2024-07-14 15:10:12.423183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.384 [2024-07-14 15:10:12.423219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.384 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.423365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.423401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.423545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.423581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.423724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.423760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.423948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.423983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.424146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.424199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.424321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.424375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.424504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.424554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 
00:37:33.385 [2024-07-14 15:10:12.424697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.424731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.424886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.424935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.425931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.425963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.426098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.426131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.426263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.426303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 
00:37:33.385 [2024-07-14 15:10:12.426434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.426484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.426630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.426665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.426816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.426851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.427010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.427041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.427180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.427233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.427430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.427470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.427602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.427655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.427808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.427844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.428017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.428050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.428178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.428225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 
00:37:33.385 [2024-07-14 15:10:12.428419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.428456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.428633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.428670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.428818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.428854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.429004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.429036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.429191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.429227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.429424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.429478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.429692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.429787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.429972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.430006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.430155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.430192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.430325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.430377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 
00:37:33.385 [2024-07-14 15:10:12.430555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.430590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.430735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.385 [2024-07-14 15:10:12.430770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.385 qpair failed and we were unable to recover it. 00:37:33.385 [2024-07-14 15:10:12.430970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.431162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.431379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.431551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.431764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.431941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.431975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.432113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.432147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.432286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.432319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 
00:37:33.386 [2024-07-14 15:10:12.432510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.432563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.432742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.432775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.432921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.432954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.433091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.433253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.433464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.433619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.433836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.433984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.434017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.434146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.434202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 
00:37:33.386 [2024-07-14 15:10:12.434352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.434405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.434546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.434600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.434758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.434807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.435002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.435035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.435234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.435271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.435479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.435515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.435664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.435713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.435885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.436070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.436102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.436255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.436291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 
00:37:33.386 [2024-07-14 15:10:12.436467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.436503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.436634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.436683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.436819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.436871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.437081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.437124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.437258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.437291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.437466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.437503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.437681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.437717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.437845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.437905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.438064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.438096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.438241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.438277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 
00:37:33.386 [2024-07-14 15:10:12.438401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.438453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.438593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.438628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.438743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.438778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.438960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.439008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.439186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.439222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.386 [2024-07-14 15:10:12.439379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.386 [2024-07-14 15:10:12.439417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.386 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.439615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.439653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.439790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.439823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.439976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.440173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 
00:37:33.387 [2024-07-14 15:10:12.440356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.440528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.440727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.440896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.440934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.441077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.441111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.441282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.441333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.441539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.441573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.441726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.441761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.441924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.441990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.442159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.442211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 
00:37:33.387 [2024-07-14 15:10:12.442409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.442461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.442646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.442698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.442923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.442958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.443135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.443341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.443545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.443693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.443860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.443989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.444036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.444215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.444255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 
00:37:33.387 [2024-07-14 15:10:12.444395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.444459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.444580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.444617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.444794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.444831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.445016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.445050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.445226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.445279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.445446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.445485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.445690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.445727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.445873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.446075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.446108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.446321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.446353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 
00:37:33.387 [2024-07-14 15:10:12.446457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.446507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.446626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.446662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.446837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.446870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.447884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.447916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 00:37:33.387 [2024-07-14 15:10:12.448055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.448088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.387 qpair failed and we were unable to recover it. 
00:37:33.387 [2024-07-14 15:10:12.448198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.387 [2024-07-14 15:10:12.448230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.448333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.448365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.448499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.448532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.448658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.448691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.448799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.448831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.448983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.449153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.449294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.449454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.449622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 
00:37:33.388 [2024-07-14 15:10:12.449778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.449925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.449959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.450920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.450956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.451071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.451242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 
00:37:33.388 [2024-07-14 15:10:12.451436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.451574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.451741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.451915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.451948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.452117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.452149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.452287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.452319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.452494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.452531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.452679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.452715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.452887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.452926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.453067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.453098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 
00:37:33.388 [2024-07-14 15:10:12.453278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.453314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.453437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.453472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.453604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.453653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.453845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.453921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.454076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.454112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.454295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.454328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.454480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.454517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.454637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.454674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.454835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.454868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.455018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.455050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 
00:37:33.388 [2024-07-14 15:10:12.455199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.455251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.455457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.455505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.455630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.455666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.455819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.455854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.388 qpair failed and we were unable to recover it. 00:37:33.388 [2024-07-14 15:10:12.456041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.388 [2024-07-14 15:10:12.456088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.459895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.459962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.460130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.460184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.460371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.460410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.460622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.460660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.460854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.460900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 
00:37:33.389 [2024-07-14 15:10:12.461078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.461113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.461261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.461296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.461449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.461486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.461672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.461710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.461941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.461986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.462217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.462266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.462420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.462471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.462702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.462753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.462919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.462981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.463145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.463212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 
00:37:33.389 [2024-07-14 15:10:12.463358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.463393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.463512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.463545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.463682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.463718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.463904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.463940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.464083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.464115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.464286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.464321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.464495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.464531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.464660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.464696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.464824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.464859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.465002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 
00:37:33.389 [2024-07-14 15:10:12.465183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.465325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.465584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.465743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.465949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.465982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.466142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.466338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.466380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.466494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.466530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.466696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.466731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.466890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.466947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 
00:37:33.389 [2024-07-14 15:10:12.467056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.467088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.467226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.467277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.467428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.467464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.389 qpair failed and we were unable to recover it. 00:37:33.389 [2024-07-14 15:10:12.467621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.389 [2024-07-14 15:10:12.467656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.467801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.467838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.468043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.468077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.468198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.468235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.468393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.468429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.468608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.468644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.468797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.468831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 
00:37:33.390 [2024-07-14 15:10:12.469028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.469202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.469382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.469567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.469752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.469933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.469967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.470103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.470137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.470285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.470318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.470480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.470511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.470650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.470683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 
00:37:33.390 [2024-07-14 15:10:12.470821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.470853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.471015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.471047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.471239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.471274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.471431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.471467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.471586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.471621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.471859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.471901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.472061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.472094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.472261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.472313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.472472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.472523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.472654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.472708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 
00:37:33.390 [2024-07-14 15:10:12.472873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.472923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.473046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.473079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.473249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.473299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.473404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.473448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.473639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.473677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.473834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.473871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.474055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.474113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.474262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.474297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.474473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.474509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.474659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.474708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 
00:37:33.390 [2024-07-14 15:10:12.474843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.474882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.475072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.475238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.475419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.475633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.475822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.475985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.476017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.390 qpair failed and we were unable to recover it. 00:37:33.390 [2024-07-14 15:10:12.476126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.390 [2024-07-14 15:10:12.476158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.476317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.476352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.476484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.476534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 
00:37:33.391 [2024-07-14 15:10:12.476723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.476758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.476938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.476972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.477116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.477148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.477303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.477338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.477489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.477524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.477716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.477789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.477911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.477962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.478131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.478184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.478343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.478374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.478534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.478572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 
00:37:33.391 [2024-07-14 15:10:12.478685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.478720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.478853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.478911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.479076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.479109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.479287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.479319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.479474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.479524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.479685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.479721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.479863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.479926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.480068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.480100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.480246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.480282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.480455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.480492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 
00:37:33.391 [2024-07-14 15:10:12.480668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.480705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.480857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.480929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.481070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.481102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.481248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.481281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.481435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.481472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.481643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.481678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.481828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.481864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.482029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.482061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.482210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.482247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.482420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.482457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 
00:37:33.391 [2024-07-14 15:10:12.482634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.482670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.482792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.482827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.483901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.483953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.484123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.484154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.484310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.484347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 
00:37:33.391 [2024-07-14 15:10:12.484472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.484508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.391 qpair failed and we were unable to recover it. 00:37:33.391 [2024-07-14 15:10:12.484639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.391 [2024-07-14 15:10:12.484688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.484866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.484929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.485079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.485253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.485458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.485666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.485845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.485991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.486024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.486203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.486238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 
00:37:33.392 [2024-07-14 15:10:12.486397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.486429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.486589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.486622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.486778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.486814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.486991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.487022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.487176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.487214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.487374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.487406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.487625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.487661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.487799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.487831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.488013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.488151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 
00:37:33.392 [2024-07-14 15:10:12.488322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.488544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.488731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.488933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.488966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.489135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.489167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.489347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.489380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.489535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.489576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.489690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.489726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.489906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.489963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.490135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.490194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 
00:37:33.392 [2024-07-14 15:10:12.490367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.490403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.490568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.490600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.490711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.490743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.490932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.490965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.491090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.491124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.491252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.491303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.491463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.491496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.491629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.491661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.491774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.491823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.492004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 
00:37:33.392 [2024-07-14 15:10:12.492170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.492371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.492591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.492724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.492894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.492927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.392 qpair failed and we were unable to recover it. 00:37:33.392 [2024-07-14 15:10:12.493072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.392 [2024-07-14 15:10:12.493105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.493208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.493240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.493378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.493428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.493586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.493619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.493786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.493822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 
00:37:33.393 [2024-07-14 15:10:12.493961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.493994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.494119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.494151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.494324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.494356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.494468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.494500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.494633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.494668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.494841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.494885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.495031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.495166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.495357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.495569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 
00:37:33.393 [2024-07-14 15:10:12.495729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.495895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.495930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.496040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.496073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.496229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.496274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.496434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.496467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.496633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.496684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.496836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.496885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.497012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.497045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.497166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.497199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 00:37:33.393 [2024-07-14 15:10:12.497364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.393 [2024-07-14 15:10:12.497400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.393 qpair failed and we were unable to recover it. 
00:37:33.393 [2024-07-14 15:10:12.497559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.393 [2024-07-14 15:10:12.497592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.393 qpair failed and we were unable to recover it.
00:37:33.396 [2024-07-14 15:10:12.521838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.396 [2024-07-14 15:10:12.521889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:33.396 qpair failed and we were unable to recover it.
00:37:33.398 (the same three-message failure sequence of "connect() failed, errno = 111", "sock connection error", and "qpair failed and we were unable to recover it" repeats continuously from 15:10:12.497559 through 15:10:12.534009, first for tqpair=0x6150001f2a00 and, from 15:10:12.521838 onward, also for tqpair=0x61500021ff00; every attempt targets addr=10.0.0.2, port=4420)
00:37:33.398 [2024-07-14 15:10:12.534135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.534169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.534338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.534370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.534530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.534563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.534705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.534740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.534881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.534916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 
00:37:33.398 [2024-07-14 15:10:12.535825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.535857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.535969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.536825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.536966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.537001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.537146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.537179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 00:37:33.398 [2024-07-14 15:10:12.537298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.537333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.398 qpair failed and we were unable to recover it. 
00:37:33.398 [2024-07-14 15:10:12.537477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.398 [2024-07-14 15:10:12.537509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.537643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.537676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.537782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.537815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.537919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.537952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.538869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.538910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 
00:37:33.399 [2024-07-14 15:10:12.539050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.539082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.539234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.539438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.539482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.539615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.539648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.539785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.539820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.539982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.540015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.540173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.540210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.540398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.540434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.540590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.540623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.540730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.540781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 
00:37:33.399 [2024-07-14 15:10:12.540963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.541155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.541379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.541545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.541720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.541873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.541919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.542083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.542132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.542283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.542315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.542420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.542453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.542615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.542651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 
00:37:33.399 [2024-07-14 15:10:12.542812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.542845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.542992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.543025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.543172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.543208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.543371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.543415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.543557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.543589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.543746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.543778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.544009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.544172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.544356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.544540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 
00:37:33.399 [2024-07-14 15:10:12.544710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.544872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.544911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.545045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.545077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.545217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.545266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.399 [2024-07-14 15:10:12.545386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.399 [2024-07-14 15:10:12.545423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.399 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.545586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.545619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.545760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.545817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.545967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.546135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.546304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 
00:37:33.400 [2024-07-14 15:10:12.546495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.546697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.546844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.546906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.547911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.547944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.548077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 
00:37:33.400 [2024-07-14 15:10:12.548240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.548394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.548576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.548771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.548939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.548974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.549118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.549285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.549484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.549651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.549798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 
00:37:33.400 [2024-07-14 15:10:12.549952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.549985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.550154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.550305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.550455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.550627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.550830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.550970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.551138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.551308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.551476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 
00:37:33.400 [2024-07-14 15:10:12.551645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.551807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.551839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.551986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.552154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.552360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.552503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.552667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.552837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.552869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.400 [2024-07-14 15:10:12.553010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.400 [2024-07-14 15:10:12.553043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.400 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.553153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.553184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 
00:37:33.401 [2024-07-14 15:10:12.553344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.553376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.553515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.553548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.553689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.553725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.553861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.553903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.554855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.554894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 
00:37:33.401 [2024-07-14 15:10:12.555046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.555212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.555409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.555579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.555751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.555905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.555938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.556104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.556275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.556445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.556621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 
00:37:33.401 [2024-07-14 15:10:12.556790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.556936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.556968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.557104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.557136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.557296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.557337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.557515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.557547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.557689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.557721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.557855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.557917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.558076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.558108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.558246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.558295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.558446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.558482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 
00:37:33.401 [2024-07-14 15:10:12.558651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.558683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.558864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.558921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.559042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.559078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.559236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.559268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.559407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.559458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.559578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.559613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.559812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.559848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.401 qpair failed and we were unable to recover it. 00:37:33.401 [2024-07-14 15:10:12.560022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.401 [2024-07-14 15:10:12.560054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.560153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.560184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.560286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.560317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 
00:37:33.402 [2024-07-14 15:10:12.560455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.560487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.560652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.560688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.560841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.560873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.560998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.561159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.561369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.561573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.561737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.561900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.561935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.562041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.562074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 
00:37:33.402 [2024-07-14 15:10:12.562218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.562251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.562386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.562418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.562577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.562609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.562769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.562802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.562970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.563199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.563375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.563537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.563708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.563915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.563947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 
00:37:33.402 [2024-07-14 15:10:12.564109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.564140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.564292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.564328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.564443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.564479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.564613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.564651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.564773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.564806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.564988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.565120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.565287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.565507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.565716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 
00:37:33.402 [2024-07-14 15:10:12.565917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.565951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.566088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.566120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.566291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.566324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.566465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.566516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.566668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.566700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.566859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.566899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.567054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.567090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.567267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.567302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.567481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.567512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.567650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.567682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 
00:37:33.402 [2024-07-14 15:10:12.567835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.567890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.402 qpair failed and we were unable to recover it. 00:37:33.402 [2024-07-14 15:10:12.568041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.402 [2024-07-14 15:10:12.568077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.568248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.568286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.568432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.568469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.568624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.568658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.568799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.568849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.568996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.569193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.569381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.569563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 
00:37:33.403 [2024-07-14 15:10:12.569730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.569900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.569933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.570077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.570250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.570456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.570653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.570818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.570981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.571172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.571312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 
00:37:33.403 [2024-07-14 15:10:12.571516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.571716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.571933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.571981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.572105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.572162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.572329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.572365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.572518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.572554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.572705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.572742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.572900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.572935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.573051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.573084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.573245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.573277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 
00:37:33.403 [2024-07-14 15:10:12.573471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.573506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.573661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.573696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.573825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.573861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.574002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.574035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.574214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.574252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.574434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.574692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.574729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.574854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.574913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.575081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.575113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.575287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.575329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 
00:37:33.403 [2024-07-14 15:10:12.575469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.575519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.575671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.575707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.575861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.575902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.576037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.576069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.576230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.576266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.403 [2024-07-14 15:10:12.576471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.403 [2024-07-14 15:10:12.576522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.403 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.576637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.576669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.576782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.576815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.576949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.576997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.577119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.577156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 
00:37:33.404 [2024-07-14 15:10:12.577331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.577366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.577498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.577685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.577733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.577865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.577913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.578845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.578888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 
00:37:33.404 [2024-07-14 15:10:12.579057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.579210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.579409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.579553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.579734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.579909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.579942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.580075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.580224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.580395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.580563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 
00:37:33.404 [2024-07-14 15:10:12.580717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.580870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.580910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.581851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.581893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.582008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.582143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 
00:37:33.404 [2024-07-14 15:10:12.582314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.582498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.582661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.582839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.582891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 00:37:33.404 [2024-07-14 15:10:12.583796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.404 [2024-07-14 15:10:12.583829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.404 qpair failed and we were unable to recover it. 
00:37:33.405 [2024-07-14 15:10:12.583948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.583982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.584912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.584946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.585068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.585235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.585394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 
00:37:33.405 [2024-07-14 15:10:12.585531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.585696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.585840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.585872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.586903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.586936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 
00:37:33.405 [2024-07-14 15:10:12.587051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.587219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.587390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.587545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.587746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.587918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.587951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.588068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.588210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.588385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.588548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 
00:37:33.405 [2024-07-14 15:10:12.588716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.588861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.588905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.589931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.589966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.590069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.405 [2024-07-14 15:10:12.590103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.405 qpair failed and we were unable to recover it. 00:37:33.405 [2024-07-14 15:10:12.590233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.590281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 
00:37:33.406 [2024-07-14 15:10:12.590427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.590460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.590603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.590636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.590780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.590813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.590917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.590951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.591889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.591937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 
00:37:33.406 [2024-07-14 15:10:12.592091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.592230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.592390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.592603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.592777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.592957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.592992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.593157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.593190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.593325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.593357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.593461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.593493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.593627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.593659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 
00:37:33.406 [2024-07-14 15:10:12.593809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.593855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.593983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.594204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.594387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.594597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.594757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.594903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.594956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.595091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.595258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.595440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 
00:37:33.406 [2024-07-14 15:10:12.595610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.595770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.595951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.595986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.596937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.596971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.597097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.597144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 
00:37:33.406 [2024-07-14 15:10:12.597309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.597345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.597449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.597482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.597594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.597628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.406 qpair failed and we were unable to recover it. 00:37:33.406 [2024-07-14 15:10:12.597776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.406 [2024-07-14 15:10:12.597809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.597963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.598125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.598277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.598479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.598627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.598796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 
00:37:33.407 [2024-07-14 15:10:12.598966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.598998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.599156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.599199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.599352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.599388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.599535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.599576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.599691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.599726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.599882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.599919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.600072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.600104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.600213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.600245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.600395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.600431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.600641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.600676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 
00:37:33.407 [2024-07-14 15:10:12.600830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.600870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.601082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.601282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.601464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.601683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.601839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.601985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.602137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.602346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.602503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 
00:37:33.407 [2024-07-14 15:10:12.602665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.602819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.602867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.602997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.603135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.603302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.603541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.603700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.603869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.603910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.604013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.604046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.604184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.604233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 
00:37:33.407 [2024-07-14 15:10:12.604410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.604446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.604574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.604626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.604774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.604810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.604991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.605039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.605177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.605224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.605426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.605479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.605647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.605698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.407 [2024-07-14 15:10:12.605848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.407 [2024-07-14 15:10:12.605888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.407 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.606024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.606185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 
00:37:33.408 [2024-07-14 15:10:12.606350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.606510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.606685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.606857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.606901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.607905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.607937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 
00:37:33.408 [2024-07-14 15:10:12.608052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.608084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.608233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.608270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.608425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.608461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.608639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.608674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.608803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.608838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.608989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.609175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.609396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.609608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.609777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 
00:37:33.408 [2024-07-14 15:10:12.609953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.609987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.610115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.610162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.610279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.610314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.610473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.610506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.610640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.610673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.610811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.610844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.611014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.611189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.611381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.611590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 
00:37:33.408 [2024-07-14 15:10:12.611766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.611929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.611981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.612093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.612126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.612258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.612308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.612444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.612481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.612684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.612721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.612850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.612892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.613027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.613060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.613211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.613258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.613414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.613468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 
00:37:33.408 [2024-07-14 15:10:12.613634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.613687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.408 [2024-07-14 15:10:12.613819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.408 [2024-07-14 15:10:12.613852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.408 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.613995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.614042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.614218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.614263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.614395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.614431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.614606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.614641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.614792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.614828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.615001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.615134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.615327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 
00:37:33.409 [2024-07-14 15:10:12.615487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.615702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.615859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.615906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.616066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.616285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.616498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.616680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.616832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.616976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.617183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 
00:37:33.409 [2024-07-14 15:10:12.617398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.617547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.617736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.617912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.617947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.618817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.618849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 
00:37:33.409 [2024-07-14 15:10:12.618991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.619175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.619211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.619357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.409 [2024-07-14 15:10:12.619392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.409 qpair failed and we were unable to recover it. 00:37:33.409 [2024-07-14 15:10:12.619534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.619582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.619759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.619795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.619946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.619979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.620124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.620319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.620474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.620637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 
00:37:33.410 [2024-07-14 15:10:12.620789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.620958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.620991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.621099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.621131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.621270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.621310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.621457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.621494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.621673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.621709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.621852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.621897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.622025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.622170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.622337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 
00:37:33.410 [2024-07-14 15:10:12.622495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.622735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.622962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.622994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.623891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.623939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.624061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 
00:37:33.410 [2024-07-14 15:10:12.624212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.624359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.624527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.624670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.624841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.624874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.625030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.625090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.625247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.625283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.625431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.625484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.410 [2024-07-14 15:10:12.625630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.410 [2024-07-14 15:10:12.625682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.410 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.625821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.625854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 
00:37:33.411 [2024-07-14 15:10:12.625988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.626856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.626978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.627122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.627296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.627441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 
00:37:33.411 [2024-07-14 15:10:12.627618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.627771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.627940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.627975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.628930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.628967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.629101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 
00:37:33.411 [2024-07-14 15:10:12.629299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.629464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.629608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.629784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.629940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.629974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.630087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.630260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.630425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.630593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.630738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 
00:37:33.411 [2024-07-14 15:10:12.630900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.630948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.631900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.631934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.632037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.632070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.411 [2024-07-14 15:10:12.632186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.411 [2024-07-14 15:10:12.632219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.411 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.632394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.632429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 
00:37:33.412 [2024-07-14 15:10:12.632552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.632600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.632748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.632783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.632925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.632959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.633921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.633957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.634072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 
00:37:33.412 [2024-07-14 15:10:12.634268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.634439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.634590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.634742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.634940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.634974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.635112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.635145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.635285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.635318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.635453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.635488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.635627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.635661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.635793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.635827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 
00:37:33.412 [2024-07-14 15:10:12.635983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.636964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.636997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.637136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.637184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.637350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.637386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.637501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.637535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 
00:37:33.412 [2024-07-14 15:10:12.637645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.637680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.637844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.637884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.637991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.638137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.638311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.638461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.638652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.638820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.638854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.639002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.639036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 00:37:33.412 [2024-07-14 15:10:12.639146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.412 [2024-07-14 15:10:12.639179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.412 qpair failed and we were unable to recover it. 
00:37:33.413 [2024-07-14 15:10:12.639294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.639328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.639482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.639529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.639647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.639680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.639822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.639855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.639975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.640124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.640277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.640416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.640592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.640747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 
00:37:33.413 [2024-07-14 15:10:12.640924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.640958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.641912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.641961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.642111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.642291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.642434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 
00:37:33.413 [2024-07-14 15:10:12.642604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.642752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.642924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.642958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.643863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.643913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.644048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.644081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 
00:37:33.413 [2024-07-14 15:10:12.644229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.644264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.644424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.644456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.644574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.644621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.644765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.413 [2024-07-14 15:10:12.644799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.413 qpair failed and we were unable to recover it. 00:37:33.413 [2024-07-14 15:10:12.644938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.644973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 00:37:33.414 [2024-07-14 15:10:12.645088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.645122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 00:37:33.414 [2024-07-14 15:10:12.645230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.645263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 00:37:33.414 [2024-07-14 15:10:12.645427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.645460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 00:37:33.414 [2024-07-14 15:10:12.645569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.645603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 00:37:33.414 [2024-07-14 15:10:12.645739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.414 [2024-07-14 15:10:12.645772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.414 qpair failed and we were unable to recover it. 
00:37:33.414 [2024-07-14 15:10:12.645912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.414 [2024-07-14 15:10:12.645953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.414 qpair failed and we were unable to recover it.
00:37:33.414 [2024-07-14 15:10:12.646060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.414 [2024-07-14 15:10:12.646092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.414 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for tqpairs 0x6150001f2a00, 0x61500021ff00 and 0x615000210000 ...]
00:37:33.690 [2024-07-14 15:10:12.682797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.690 [2024-07-14 15:10:12.682830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.690 qpair failed and we were unable to recover it.
00:37:33.690 [2024-07-14 15:10:12.682986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.683146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.683325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.683512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.683703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.683928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.683961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.684091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.684127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.684298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.684334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.684447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.684484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.684663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.684716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-07-14 15:10:12.684867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.684908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.685066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.685115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.685294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.685345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.685526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.685576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.685713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.685746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.685888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.685922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.686063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.686239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.686445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.686606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 
00:37:33.690 [2024-07-14 15:10:12.686778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.686958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.686991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.687103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.690 [2024-07-14 15:10:12.687135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.690 qpair failed and we were unable to recover it. 00:37:33.690 [2024-07-14 15:10:12.687321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.687353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.687463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.687496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.687658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.687690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.687802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.687853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.688002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.688139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.688359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-07-14 15:10:12.688530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.688740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.688915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.688960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.689951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.689983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.690083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-07-14 15:10:12.690249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.690433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.690594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.690726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.690895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.690929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.691061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.691094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.691254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.691291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.691468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.691505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.691684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.691720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.691892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.691925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-07-14 15:10:12.692065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.692098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.692241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.692284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.692428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.692464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.692650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.692687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.692843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.692897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.693036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.693210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.693400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.693546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.693764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 
00:37:33.691 [2024-07-14 15:10:12.693964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.693998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.694134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.694186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.694362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.694398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.691 qpair failed and we were unable to recover it. 00:37:33.691 [2024-07-14 15:10:12.694607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.691 [2024-07-14 15:10:12.694655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.694815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.694848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.694995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.695140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.695308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.695521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.695743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-07-14 15:10:12.695939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.695977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.696159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.696195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.696372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.696408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.696544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.696580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.696696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.696732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.696861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.696901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.697017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.697050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.697208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.697245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.697455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.697491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.697665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.697701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-07-14 15:10:12.697824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.697860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.698035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.698068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.698201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.698234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.698399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.698436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.698613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.698649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.698824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.698860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.699052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.699089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.699246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.699279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.699429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.699464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.699607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.699643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-07-14 15:10:12.699821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.699858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.700948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.700983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.701137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.701185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.701358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.701398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.701527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.701579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 
00:37:33.692 [2024-07-14 15:10:12.701721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.701757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.701874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.701936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.702100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.702133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.702288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.692 [2024-07-14 15:10:12.702324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.692 qpair failed and we were unable to recover it. 00:37:33.692 [2024-07-14 15:10:12.702473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.702510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.702710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.702747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.702928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.702976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.703096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.703132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.703267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.703300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.703449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.703485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-07-14 15:10:12.703641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.703685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.703868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.703909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.704050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.704279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.704465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.704618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.704810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.704990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.705157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.705382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-07-14 15:10:12.705587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.705742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.705916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.705950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.706109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.706142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.706294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.706328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.706487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.706539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.706677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.706711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.706873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.706928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.707043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.707076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 00:37:33.693 [2024-07-14 15:10:12.707214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.693 [2024-07-14 15:10:12.707251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.693 qpair failed and we were unable to recover it. 
00:37:33.693 [2024-07-14 15:10:12.707403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.693 [2024-07-14 15:10:12.707439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.693 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously from 15:10:12.707 to 15:10:12.744 for tqpair addresses 0x6150001f2a00, 0x615000210000, 0x61500021ff00 and 0x6150001ffe80, all against addr=10.0.0.2, port=4420 ...]
00:37:33.699 [2024-07-14 15:10:12.744769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.699 [2024-07-14 15:10:12.744802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.699 qpair failed and we were unable to recover it.
00:37:33.699 [2024-07-14 15:10:12.744931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.744965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.745126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.745159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.745295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.745328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.745489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.745521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.745671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.745704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.745888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.745923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.746077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.746242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.746402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.746560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 
00:37:33.699 [2024-07-14 15:10:12.746764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.746907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.746941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.747915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.747948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.748059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.748252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 
00:37:33.699 [2024-07-14 15:10:12.748454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.748601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.748763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.748959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.748992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.749158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.749324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.749524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.749690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.749859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.749991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 
00:37:33.699 [2024-07-14 15:10:12.750162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.750332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.750475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.750640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.750777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.750920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.750953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.751056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-14 15:10:12.751088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-14 15:10:12.751191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.751364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.751506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 
00:37:33.700 [2024-07-14 15:10:12.751656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.751802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.751950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.751983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.752116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.752149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.752289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.752322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.752450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.752484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.752621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.752654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.752798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.752830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.753002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.753215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 
00:37:33.700 [2024-07-14 15:10:12.753378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.753563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.753701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.753888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.753921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.754899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.754945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 
00:37:33.700 [2024-07-14 15:10:12.755109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.755142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.755250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.755282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.755448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.755480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.755584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.755616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.755789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.755836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.755991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.756142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.756321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.756530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.756676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 
00:37:33.700 [2024-07-14 15:10:12.756844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.756885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.757891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.700 [2024-07-14 15:10:12.757925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.700 qpair failed and we were unable to recover it. 00:37:33.700 [2024-07-14 15:10:12.758043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.758082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.758265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.758313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.758486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.758520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 
00:37:33.701 [2024-07-14 15:10:12.758656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.758689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.758818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.758850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.759941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.759988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.760135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.760169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.760283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.760316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 
00:37:33.701 [2024-07-14 15:10:12.760488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.760521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.760665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.760697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.760844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.760885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.761842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.761980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 
00:37:33.701 [2024-07-14 15:10:12.762196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.762396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.762565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.762700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.762849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.762889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.763026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.763221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.763419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.763562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.763728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 
00:37:33.701 [2024-07-14 15:10:12.763907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.763940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.764104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.764272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.764469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.764638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.764814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.764989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.765024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.765188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.701 [2024-07-14 15:10:12.765226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.701 qpair failed and we were unable to recover it. 00:37:33.701 [2024-07-14 15:10:12.765388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.765421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.765557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.765590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 
00:37:33.702 [2024-07-14 15:10:12.765755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.765791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.765939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.765973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.766963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.766997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.767138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.767171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.767328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.767361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 
00:37:33.702 [2024-07-14 15:10:12.767498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.767531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.767671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.767704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.767834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.767867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.767985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.768846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.768988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 
00:37:33.702 [2024-07-14 15:10:12.769129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.769302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.769468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.769620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.769786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.769931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.769964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.770099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.770265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.770456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.770631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 
00:37:33.702 [2024-07-14 15:10:12.770771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.770907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.770940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.771080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.702 [2024-07-14 15:10:12.771113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.702 qpair failed and we were unable to recover it. 00:37:33.702 [2024-07-14 15:10:12.771245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.771277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.771437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.771469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.771576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.771609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.771743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.771775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.771939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.771977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.772116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.772276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 
00:37:33.703 [2024-07-14 15:10:12.772446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.772593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.772754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.772946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.772980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.773163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.773303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.773446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.773590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.773765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.773999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 
00:37:33.703 [2024-07-14 15:10:12.774220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.774374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.774549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.774725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.774897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.774932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.775064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.775110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.775285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.775320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.775483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.775517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.775654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.775687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.775849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.775895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 
00:37:33.703 [2024-07-14 15:10:12.776006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.776180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.776360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.776552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.776729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.776905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.776938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.777086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.777132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.777333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.777371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.777527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.777561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.777729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.777767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 
00:37:33.703 [2024-07-14 15:10:12.777922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.703 [2024-07-14 15:10:12.777955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.703 qpair failed and we were unable to recover it. 00:37:33.703 [2024-07-14 15:10:12.778097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.778130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.778269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.778306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.778479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.778527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.778683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.778720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.778939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.778987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.779157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.779192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.779358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.779400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.779520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.779557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.779683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.779733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 
00:37:33.704 [2024-07-14 15:10:12.779887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.779939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.780121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.780296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.780458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.780660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.780847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.780994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.781026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.781180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.781216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.781391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.781426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.781602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.781639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 
00:37:33.704 [2024-07-14 15:10:12.781776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.781812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.781976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.782146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.782348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.782519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.782683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.782867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.782926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.783062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.783094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.783192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.783240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.783405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.783441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 
00:37:33.704 [2024-07-14 15:10:12.783609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.783645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.783821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.783857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.784017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.784049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.784204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.784251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.784452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.784505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.784695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.784747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.784853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.784895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.785066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.785100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.785254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.785304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.785499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.785536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 
00:37:33.704 [2024-07-14 15:10:12.785690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.785726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.785868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.704 [2024-07-14 15:10:12.785933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.704 qpair failed and we were unable to recover it. 00:37:33.704 [2024-07-14 15:10:12.786051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.786085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.786225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.786276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.786423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.786460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.786595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.786649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.786799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.786835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.787018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.787074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.787220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.787285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.787451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.787504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 
00:37:33.705 [2024-07-14 15:10:12.787665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.787719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.787858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.787900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.788054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.788106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.788312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.788349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.788465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.788507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.788671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.788704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.788864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.788906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.789061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.789108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.789264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.789300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.789462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.789514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 
00:37:33.705 [2024-07-14 15:10:12.789709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.789760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.789889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.789924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.790065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.790098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.790244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.790296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.790487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.790538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.790647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.790680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.790858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.790915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.791043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.791096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.791240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.791276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.791417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.791463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 
00:37:33.705 [2024-07-14 15:10:12.791620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.791656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.791779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.791816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.791982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.792017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.792199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.792254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.792407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.792458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.792614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.792665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.792841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.792875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.793034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.793068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.793199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.793236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.793357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.793393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 
00:37:33.705 [2024-07-14 15:10:12.793567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.793603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.793785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.793818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.705 [2024-07-14 15:10:12.793990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.705 [2024-07-14 15:10:12.794024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.705 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.794147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.794195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.794382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.794435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.794590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.794641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.794777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.794812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.794920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.794959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.795079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.795113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.795288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.795326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 
00:37:33.706 [2024-07-14 15:10:12.795449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.795485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.795624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.795660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.795782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.795815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.795977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.796164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.796424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.796576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.796735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.796918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.796952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.797068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.797102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 
00:37:33.706 [2024-07-14 15:10:12.797233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.797270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.797476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.797513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.797663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.797700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.797849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.797904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.798959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.798993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 
00:37:33.706 [2024-07-14 15:10:12.799128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.799161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.799312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.799348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.799491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.799527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.799657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.799708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.799891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.799955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.800105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.800139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.800335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.800372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.800544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.706 [2024-07-14 15:10:12.800581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.706 qpair failed and we were unable to recover it. 00:37:33.706 [2024-07-14 15:10:12.800729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.800765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.800930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.800964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 
00:37:33.707 [2024-07-14 15:10:12.801098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.801131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.801267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.801304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.801439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.801490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.801609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.801659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.801791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.801827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.801998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.802143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.802354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.802570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.802723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 
00:37:33.707 [2024-07-14 15:10:12.802917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.802951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.803078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.803111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.803249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.803300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.803445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.803481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.803656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.803692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.803873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.804041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.804074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.804242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.804275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2064807 Killed "${NVMF_APP[@]}" "$@" 00:37:33.707 [2024-07-14 15:10:12.804454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.804490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 
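The 'line 36: 2064807 Killed "${NVMF_APP[@]}" "$@"' message above is bash reporting that the target application it had launched from "${NVMF_APP[@]}" (pid 2064807) was terminated by SIGKILL; taking the target down is presumably the fault this target_disconnect test injects, and it is what turns every connect() attempt that follows into errno = 111. A minimal illustrative sketch of that shell pattern (generic bash, not the suite's actual helpers):

    # Start the target app in the background, then SIGKILL it; bash later reports
    # the dead job with a "<line>: <pid> Killed <command>" message like the one above.
    "${NVMF_APP[@]}" "$@" &
    app_pid=$!
    kill -9 "$app_pid"
    wait "$app_pid" 2>/dev/null || true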
00:37:33.707 [2024-07-14 15:10:12.804638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:33.707 [2024-07-14 15:10:12.804675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:33.707 [2024-07-14 15:10:12.804845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.804885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:33.707 [2024-07-14 15:10:12.805058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.805092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.707 [2024-07-14 15:10:12.805278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.805315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.805442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.805492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.805669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.805705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.805852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.805891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.806023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 
00:37:33.707 [2024-07-14 15:10:12.806187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.806393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.806588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.806760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.806934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.806968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.807147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.807195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.807366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.807418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.807644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.807678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.807894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.707 [2024-07-14 15:10:12.807929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.707 qpair failed and we were unable to recover it. 00:37:33.707 [2024-07-14 15:10:12.808088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.808135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 
00:37:33.708 [2024-07-14 15:10:12.808338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.808377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.808503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.808540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.808701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.808738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2065479 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:33.708 [2024-07-14 15:10:12.808908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.808959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2065479 00:37:33.708 [2024-07-14 15:10:12.809100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.809133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2065479 ']' 00:37:33.708 [2024-07-14 15:10:12.809283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.809334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.708 [2024-07-14 15:10:12.809468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:33.708 [2024-07-14 15:10:12.809506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:33.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.708 [2024-07-14 15:10:12.809622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.809659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:33.708 [2024-07-14 15:10:12.809827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.809860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe8 15:10:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.708 0 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.810928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.810962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.811103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.811136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 
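In these messages errno = 111 is ECONNREFUSED: with the old target gone, nothing is accepting on 10.0.0.2:4420, so the host keeps retrying while the trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (new pid 2065479) and waitforlisten blocks until the app's RPC socket /var/tmp/spdk.sock appears. A minimal sketch of an equivalent wait on the TCP listener itself, assuming plain bash with its /dev/tcp pseudo-device rather than the waitforlisten helper the suite uses:

    # Poll 10.0.0.2:4420 until connect() stops returning ECONNREFUSED (errno 111).
    # The subshell's fd 3 is closed automatically when each attempt exits.
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        sleep 0.1
    done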
00:37:33.708 [2024-07-14 15:10:12.811318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.811355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.811545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.811582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.811713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.811750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.811919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.811952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.812122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.812280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.812454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.812638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.812850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.812982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.813015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 
00:37:33.708 [2024-07-14 15:10:12.813151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.813183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.813341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.813378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.813565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.813601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.813811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.813848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.813999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.814038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.814187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.814224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.814397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.814434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.814585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.814623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.814768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.814805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 00:37:33.708 [2024-07-14 15:10:12.814990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.708 [2024-07-14 15:10:12.815023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.708 qpair failed and we were unable to recover it. 
00:37:33.708 [2024-07-14 15:10:12.815182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.815219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.815362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.815398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.815627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.815663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.815814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.815851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.816045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.816217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.816444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.816625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.816820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.816968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 
00:37:33.709 [2024-07-14 15:10:12.817102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.817309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.817479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.817672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.817891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.817924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.818036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.818068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.818220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.818258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.818442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.818474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.818642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.818692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.818841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.818891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 
00:37:33.709 [2024-07-14 15:10:12.819041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.819074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.819236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.819273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.819464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.819497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.819630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.819663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.819798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.819850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.820019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.820197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.820345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.820509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.820678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 
00:37:33.709 [2024-07-14 15:10:12.820868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.820910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.821082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.821119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.821249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.821282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.821427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.821476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.821650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.821690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.821846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.821884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.822042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.822078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.822189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.822225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.822382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.822415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.822528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.822577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 
00:37:33.709 [2024-07-14 15:10:12.822725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.709 [2024-07-14 15:10:12.822762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.709 qpair failed and we were unable to recover it. 00:37:33.709 [2024-07-14 15:10:12.822922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.822955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.823072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.823123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.823267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.823303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.823459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.823491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.823605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.823638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.823799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.823849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.824021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.824221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.824405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 
00:37:33.710 [2024-07-14 15:10:12.824557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.824767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.824947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.824980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.825943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.825976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.826137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.826170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 
00:37:33.710 [2024-07-14 15:10:12.826342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.826376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.826520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.826553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.826690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.826722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.826890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.826923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.827840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.827873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 
00:37:33.710 [2024-07-14 15:10:12.828020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.828865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.828987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.829021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.829159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.829195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.829339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.829377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 00:37:33.710 [2024-07-14 15:10:12.829524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.710 [2024-07-14 15:10:12.829557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.710 qpair failed and we were unable to recover it. 
00:37:33.710 [2024-07-14 15:10:12.829676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.710 [2024-07-14 15:10:12.829713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.710 qpair failed and we were unable to recover it.
[... the same two-line failure (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 15:10:12.829859 through 15:10:12.864426; between 15:10:12.859286 and 15:10:12.860114 the identical failure is also reported for tqpair=0x6150001f2a00 ...]
00:37:33.716 [2024-07-14 15:10:12.864719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.716 [2024-07-14 15:10:12.864752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.716 qpair failed and we were unable to recover it.
00:37:33.716 [2024-07-14 15:10:12.864892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.864930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.865893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.865926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.866032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.866205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.866344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 
00:37:33.716 [2024-07-14 15:10:12.866524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.866674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.866841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.866891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.867857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.867898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.868028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 
00:37:33.716 [2024-07-14 15:10:12.868184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.868353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.868552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.868703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.868869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.868913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.869029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.869069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.716 [2024-07-14 15:10:12.869210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.716 [2024-07-14 15:10:12.869243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.716 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.869368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.869401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.869566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.869599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.869704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.869741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 
00:37:33.717 [2024-07-14 15:10:12.869846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.869884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.870841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.870995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.871165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.871331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 
00:37:33.717 [2024-07-14 15:10:12.871515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.871688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.871842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.871884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.872962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.872995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 
00:37:33.717 [2024-07-14 15:10:12.873101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.873284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.873475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.873625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.873765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.873934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.873970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.874111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.874263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.874460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.874632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 
00:37:33.717 [2024-07-14 15:10:12.874779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.874925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.874959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.717 [2024-07-14 15:10:12.875858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.717 [2024-07-14 15:10:12.875915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.717 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 
00:37:33.718 [2024-07-14 15:10:12.876360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.876887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.876997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.877155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.877354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.877488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.877640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.877787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.877820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 
00:37:33.718 [2024-07-14 15:10:12.877968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.878902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.878935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.879070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.879253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.879421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 
00:37:33.718 [2024-07-14 15:10:12.879557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.879732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.879899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.879932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 00:37:33.718 [2024-07-14 15:10:12.880946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.718 [2024-07-14 15:10:12.880979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.718 qpair failed and we were unable to recover it. 
00:37:33.718 [2024-07-14 15:10:12.881112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.881267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.881434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.881574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.881710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.881865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.881910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.882051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.882085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.882223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.882256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.882358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.882392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.742 [2024-07-14 15:10:12.882493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.882526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 
00:37:33.742 [2024-07-14 15:10:12.882691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.742 [2024-07-14 15:10:12.882739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.742 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.882855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.882897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.883854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.883915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.884067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.884232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 
00:37:33.743 [2024-07-14 15:10:12.884380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.884535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.884733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.884899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.884947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 00:37:33.743 [2024-07-14 15:10:12.885962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.743 [2024-07-14 15:10:12.885996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.743 qpair failed and we were unable to recover it. 
00:37:33.743 [2024-07-14 15:10:12.886109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.743 [2024-07-14 15:10:12.886143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.743 qpair failed and we were unable to recover it.
00:37:33.743 [2024-07-14 15:10:12.886294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.743 [2024-07-14 15:10:12.886342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.743 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock connection error / "qpair failed and we were unable to recover it." sequence repeats without interruption from 15:10:12.886467 through 15:10:12.899050, cycling over tqpair=0x6150001f2a00, 0x6150001ffe80 and 0x615000210000, always against addr=10.0.0.2, port=4420 ...]
00:37:33.745 [2024-07-14 15:10:12.899153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.745 [2024-07-14 15:10:12.899186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.745 qpair failed and we were unable to recover it.
00:37:33.745 [2024-07-14 15:10:12.899334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.745 [2024-07-14 15:10:12.899367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.745 qpair failed and we were unable to recover it.
00:37:33.745 [2024-07-14 15:10:12.899469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:33.745 [2024-07-14 15:10:12.899536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.745 [2024-07-14 15:10:12.899570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
[2024-07-14 15:10:12.899580] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:33.745 qpair failed and we were unable to recover it.
[... six more identical connect() failed (errno = 111) / qpair-failed attempts follow between 15:10:12.899679 and 15:10:12.900584, against tqpair=0x6150001ffe80 and 0x6150001f2a00 ...]
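A note for whoever triages this run: errno = 111 in the posix_sock_create messages above is ECONNREFUSED on Linux, i.e. each TCP connection attempt to 10.0.0.2:4420 was actively refused, which usually means nothing was accepting on that port at that instant and is consistent with the nvmf target still being inside the SPDK/DPDK initialization logged at 15:10:12.899469. The fragment below is a minimal stand-alone probe, not SPDK code and not part of this test suite; the address and port are simply copied from the log, and it reports the same errno whenever no listener is up:

    /* probe.c -- illustration only (assumed helper, not from SPDK or this job).
     * Makes one TCP connect to the address/port seen in the log and reports errno.
     * With no NVMe/TCP listener on 10.0.0.2:4420 it prints errno = 111 (Connection refused). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        else
            printf("connected\n");
        close(fd);
        return 0;
    }

Under that reading, the long run of repeated failures in this interval is most likely the initiator retrying while the target's TCP listener on port 4420 has not yet come up, so every qpair connect is refused and torn down.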
00:37:33.745 [2024-07-14 15:10:12.900697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.745 [2024-07-14 15:10:12.900730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.745 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock connection error / "qpair failed and we were unable to recover it." sequence continues uninterrupted from 15:10:12.900835 through 15:10:12.920076, still cycling over tqpair=0x6150001f2a00, 0x6150001ffe80 and 0x615000210000 against addr=10.0.0.2, port=4420 ...]
00:37:33.748 [2024-07-14 15:10:12.920211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.748 [2024-07-14 15:10:12.920244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.748 qpair failed and we were unable to recover it.
00:37:33.748 [2024-07-14 15:10:12.920350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.748 [2024-07-14 15:10:12.920383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.748 qpair failed and we were unable to recover it. 00:37:33.748 [2024-07-14 15:10:12.920514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.748 [2024-07-14 15:10:12.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.748 qpair failed and we were unable to recover it. 00:37:33.748 [2024-07-14 15:10:12.920684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.748 [2024-07-14 15:10:12.920719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.920856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.920896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.921813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.921846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 
00:37:33.749 [2024-07-14 15:10:12.922007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.922192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.922390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.922531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.922725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.922928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.922962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.923068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.923102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.923233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.923267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.923435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.923468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.923635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.923669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 
00:37:33.749 [2024-07-14 15:10:12.923780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.923814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.923967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.924964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.924998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.925098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.925131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.925257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.925290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 
00:37:33.749 [2024-07-14 15:10:12.925494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.925528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.925662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.925697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.925832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.925866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.925988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.926127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.926325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.926475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.926653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.926824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.926859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.927032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.927066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 
00:37:33.749 [2024-07-14 15:10:12.927224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.927258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.927362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.927396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.927561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.927595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.749 qpair failed and we were unable to recover it. 00:37:33.749 [2024-07-14 15:10:12.927708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.749 [2024-07-14 15:10:12.927742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.927845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.927887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 
00:37:33.750 [2024-07-14 15:10:12.928798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.928832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.928973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.929140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.929314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.929487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.929699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.929895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.929929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.930034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.930202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.930394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 
00:37:33.750 [2024-07-14 15:10:12.930572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.930738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.930891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.930925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.931837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.931870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.932011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 
00:37:33.750 [2024-07-14 15:10:12.932175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.932348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.932527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.932731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.932916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.932950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.933116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.933252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.933423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.933572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.933725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 
00:37:33.750 [2024-07-14 15:10:12.933883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.933917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.934083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.934116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.934225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.934258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.934380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.750 [2024-07-14 15:10:12.934416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.750 qpair failed and we were unable to recover it. 00:37:33.750 [2024-07-14 15:10:12.934581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.934618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.934724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.934757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.934863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.934906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.935055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.935226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.935392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 
00:37:33.751 [2024-07-14 15:10:12.935528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.935707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.935908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.935956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.936119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.936302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.936447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.936617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.936812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.936972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.937122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 
00:37:33.751 [2024-07-14 15:10:12.937317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.937510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.937683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.937829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.937870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.938899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.938935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 
00:37:33.751 [2024-07-14 15:10:12.939089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.939136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.939324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.939360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.939534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.939568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.939669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.939703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.939841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.939885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.940001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.940176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.940348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.940521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.940699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 
00:37:33.751 [2024-07-14 15:10:12.940887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.940922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.941024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.941058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.941167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.941202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.941319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.751 [2024-07-14 15:10:12.941354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.751 qpair failed and we were unable to recover it. 00:37:33.751 [2024-07-14 15:10:12.941493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.941531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 00:37:33.752 [2024-07-14 15:10:12.941697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.941730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 00:37:33.752 [2024-07-14 15:10:12.941870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.941910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 00:37:33.752 [2024-07-14 15:10:12.942017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.942050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 00:37:33.752 [2024-07-14 15:10:12.942194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.942227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 00:37:33.752 [2024-07-14 15:10:12.942397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.752 [2024-07-14 15:10:12.942430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.752 qpair failed and we were unable to recover it. 
00:37:33.752 [2024-07-14 15:10:12.942594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.752 [2024-07-14 15:10:12.942629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.752 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 15:10:12.942594 through 15:10:12.978503: every connect() attempt fails with errno = 111 (connection refused) against addr=10.0.0.2, port=4420, cycling over tqpair=0x6150001f2a00, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:33.757 [2024-07-14 15:10:12.978469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.757 [2024-07-14 15:10:12.978503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.757 qpair failed and we were unable to recover it.
00:37:33.757 [2024-07-14 15:10:12.978611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.757 [2024-07-14 15:10:12.978652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.757 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.978794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.978829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.978962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.978996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.979168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.979319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.979488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.979662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.979840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.979986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.980216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-14 15:10:12.980367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.980542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.980697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.980836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.980871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.981816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.981848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-14 15:10:12.981998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.982921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.982954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.983092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.983243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.983418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-14 15:10:12.983589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.983740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.983889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-14 15:10:12.983923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-14 15:10:12.984033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.984842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.984995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 
00:37:34.030 [2024-07-14 15:10:12.985176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.985320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.985497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.985667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.985851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.985918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.986074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.986229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.986368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.986543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.986692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 
00:37:34.030 [2024-07-14 15:10:12.986852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.986899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 EAL: No free 2048 kB hugepages reported on node 1 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.987906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.987954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.988086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.988281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.988474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 
00:37:34.030 [2024-07-14 15:10:12.988640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.988779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.988928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.988963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.989954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.989988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.990129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.990162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 
00:37:34.030 [2024-07-14 15:10:12.990274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.990308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.990474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.990507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.990642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.990675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-14 15:10:12.990798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-14 15:10:12.990846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.990975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.991154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.991355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.991499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.991679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.991853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.991902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 
00:37:34.031 [2024-07-14 15:10:12.992035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.992173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.992346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.992517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.992666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.992939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.992973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.993088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.993120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.993328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.993361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.993526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.993560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.993697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.993730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 
00:37:34.031 [2024-07-14 15:10:12.993897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.993931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.994902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.994936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.995069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.995218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.995397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 
00:37:34.031 [2024-07-14 15:10:12.995540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.995674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.995811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.995844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.996911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.996947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.997099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.997133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 
00:37:34.031 [2024-07-14 15:10:12.997278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.997311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.997450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.997484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.997596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.997630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.997748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.031 [2024-07-14 15:10:12.997782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.031 qpair failed and we were unable to recover it. 00:37:34.031 [2024-07-14 15:10:12.997910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.997945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.998081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.998114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.998250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.998284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.998426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.998460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.998608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.998642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.998806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.998841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 
00:37:34.032 [2024-07-14 15:10:12.999000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:12.999956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:12.999990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.000091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.000268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.000467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 
00:37:34.032 [2024-07-14 15:10:13.000609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.000761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.000953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.000989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.001965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.001998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.002106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 
00:37:34.032 [2024-07-14 15:10:13.002250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.002417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.002584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.002725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.002872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.002911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.003034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.003235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.003416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.003587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.003759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 
00:37:34.032 [2024-07-14 15:10:13.003922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.003957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.004067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.004101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.004245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.004278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.004426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.004459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.032 [2024-07-14 15:10:13.004571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.032 [2024-07-14 15:10:13.004603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.032 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.004726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.004760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.004891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.004925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.005039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.005195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.005391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 
00:37:34.033 [2024-07-14 15:10:13.005534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.005729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.005919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.005953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.006931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.006964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.007089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 
00:37:34.033 [2024-07-14 15:10:13.007303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.007457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.007616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.007762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.007944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.007978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.008100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.008261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.008432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.008626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.008760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 
00:37:34.033 [2024-07-14 15:10:13.008928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.008961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.009916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.009949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.010068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.033 [2024-07-14 15:10:13.010116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.033 qpair failed and we were unable to recover it. 00:37:34.033 [2024-07-14 15:10:13.010271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.010308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.010459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.010494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 
00:37:34.034 [2024-07-14 15:10:13.010673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.010708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.010846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.010898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.011902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.011949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.012085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.012119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.012285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.012319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 
00:37:34.034 [2024-07-14 15:10:13.012451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.012484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.012628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.012662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.012825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.012858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.012977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.013940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.013974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 
00:37:34.034 [2024-07-14 15:10:13.014081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.014232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.014405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.014543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.014689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.014835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.014868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.015014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.015180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.015346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.015511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 
00:37:34.034 [2024-07-14 15:10:13.015682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.015868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.015923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.016912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.016946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.034 qpair failed and we were unable to recover it. 00:37:34.034 [2024-07-14 15:10:13.017068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.034 [2024-07-14 15:10:13.017102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.017276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.017309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 
00:37:34.035 [2024-07-14 15:10:13.017423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.017456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.017591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.017623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.017734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.017767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.017914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.017949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.018093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.018127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.018279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.018314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.018607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.018671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.018810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.018843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.019021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.019165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 
00:37:34.035 [2024-07-14 15:10:13.019344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.019516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.019685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.019890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.019924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.020883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.020917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 
00:37:34.035 [2024-07-14 15:10:13.021049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.021245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.021418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.021570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.021765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.021942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.021976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.022129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.022176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.022331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.022366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.022508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.022542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.022672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.022704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 
00:37:34.035 [2024-07-14 15:10:13.022836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.022869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.023924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.023972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.024143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.024190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.024304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.024337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.035 qpair failed and we were unable to recover it. 00:37:34.035 [2024-07-14 15:10:13.024474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.035 [2024-07-14 15:10:13.024507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 
00:37:34.036 [2024-07-14 15:10:13.024617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.024650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.024769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.024816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.024966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.025165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.025354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.025501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.025677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.025823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.025856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.026003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.026180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 
00:37:34.036 [2024-07-14 15:10:13.026332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.026500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.026642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.026843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.026894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.027833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.027867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 
00:37:34.036 [2024-07-14 15:10:13.028043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.028252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.028426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.028566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.028709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.028919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.028954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.029110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.029286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.029470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.029618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 
00:37:34.036 [2024-07-14 15:10:13.029762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.029934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.029968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.030097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.030130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.030276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.030309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.030447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.030480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.030665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.030700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.030839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.030888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.031034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.031067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.031779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.031827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 00:37:34.036 [2024-07-14 15:10:13.031968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.036 [2024-07-14 15:10:13.032009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.036 qpair failed and we were unable to recover it. 
00:37:34.036 [2024-07-14 15:10:13.032124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.032157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.032307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.032340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.032499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.032532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.032663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.032695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.032847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.032908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.033057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.033223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.033398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.033573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.033737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 
00:37:34.037 [2024-07-14 15:10:13.033916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.033951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.034852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.034907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.035023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.035197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.035366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 
00:37:34.037 [2024-07-14 15:10:13.035532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.035714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.035871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.035910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.036849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.036898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.037006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 
00:37:34.037 [2024-07-14 15:10:13.037179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.037321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.037485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.037634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.037799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.037 [2024-07-14 15:10:13.037832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.037 qpair failed and we were unable to recover it. 00:37:34.037 [2024-07-14 15:10:13.038017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.038208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.038390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.038539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.038707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 
00:37:34.038 [2024-07-14 15:10:13.038867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.038923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.039814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.039850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.040008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.040240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.040432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 
00:37:34.038 [2024-07-14 15:10:13.040575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.040742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.040938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.040971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.041854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.041901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.042036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 
00:37:34.038 [2024-07-14 15:10:13.042182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.042350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.042499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.042660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.042851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.042898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.043034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.043206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.043384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.043552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.043710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 
00:37:34.038 [2024-07-14 15:10:13.043887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.043921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.044062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.044201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.044391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.044532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.038 [2024-07-14 15:10:13.044693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.038 qpair failed and we were unable to recover it. 00:37:34.038 [2024-07-14 15:10:13.044808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.044841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.044981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.045146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.045311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 
00:37:34.039 [2024-07-14 15:10:13.045474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.045617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.045793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.045841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.045969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.046838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.046990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 
00:37:34.039 [2024-07-14 15:10:13.047129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.047275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.047421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.047586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.047747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.047794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.047962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.048116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.048272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.048435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.048596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 
00:37:34.039 [2024-07-14 15:10:13.048793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.048952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.048991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.049129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.049271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.049304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.049460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.049504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.049644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.049677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.049812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.049844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.050005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.050208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.050379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 
00:37:34.039 [2024-07-14 15:10:13.050556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.050749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.050907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.050942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.051051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.051086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.051251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.051290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.051432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.051467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.039 [2024-07-14 15:10:13.051604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.039 [2024-07-14 15:10:13.051637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.039 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.051750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.051788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.051961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.052129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 
00:37:34.040 [2024-07-14 15:10:13.052307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.052482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.052652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.052817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.052964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.052998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.053109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.053142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.053291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.053325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.053487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.053520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.053669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.053716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.053844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.053910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 
00:37:34.040 [2024-07-14 15:10:13.054049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.054227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.054379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.054548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.054713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:34.040 [2024-07-14 15:10:13.054722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.054871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.054914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.055064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.055268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.055415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 
00:37:34.040 [2024-07-14 15:10:13.055588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.055738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.055916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.055963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.056117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.056299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.056439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.056633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.056803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.056980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.057124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 
00:37:34.040 [2024-07-14 15:10:13.057276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.057471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.057640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.057790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.057943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.057976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.058133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.058170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.058302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.058335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.058443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.040 [2024-07-14 15:10:13.058476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.040 qpair failed and we were unable to recover it. 00:37:34.040 [2024-07-14 15:10:13.058608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.058641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.058742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.058775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 
00:37:34.041 [2024-07-14 15:10:13.058901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.058934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.059845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.059974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.060140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.060321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 
00:37:34.041 [2024-07-14 15:10:13.060493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.060663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.060838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.060908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.061923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.061957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.062109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.062157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 
00:37:34.041 [2024-07-14 15:10:13.062290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.062324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.062459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.062492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.062613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.062647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.062792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.062825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.062980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.063144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.063323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.063487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.063674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.063856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.063915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 
00:37:34.041 [2024-07-14 15:10:13.064045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.064200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.064376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.064522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.064701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.064904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.064957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.065081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.065116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.041 qpair failed and we were unable to recover it. 00:37:34.041 [2024-07-14 15:10:13.065254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.041 [2024-07-14 15:10:13.065288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.065447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.065479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.065577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.065610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 
00:37:34.042 [2024-07-14 15:10:13.065734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.065782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.065921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.065957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.066164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.066310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.066494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.066661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.066824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.066990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.067181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.067357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 
00:37:34.042 [2024-07-14 15:10:13.067499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.067664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.067859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.067900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.068916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.068964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.069133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.069181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 
00:37:34.042 [2024-07-14 15:10:13.069330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.069367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.069513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.069547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.069701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.069735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.069897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.069930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.070107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.070287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.070453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.070606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.070793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.070970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 
00:37:34.042 [2024-07-14 15:10:13.071194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.071387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.071533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.071671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.042 [2024-07-14 15:10:13.071869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.042 [2024-07-14 15:10:13.071923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.042 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.072069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.072222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.072367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.072534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.072703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 
00:37:34.043 [2024-07-14 15:10:13.072872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.072951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.073844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.073886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 
00:37:34.043 [2024-07-14 15:10:13.074494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.074957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.074991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.075171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.075376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.075551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.075708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.075872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.075995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 
00:37:34.043 [2024-07-14 15:10:13.076171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.076341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.076488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.076665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.076824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.076857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.077021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.077207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.077370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.077540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.077732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 
00:37:34.043 [2024-07-14 15:10:13.077895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.077929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.078046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.078080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.078215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.078250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.078384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.078418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.078556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.078590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.043 qpair failed and we were unable to recover it. 00:37:34.043 [2024-07-14 15:10:13.078700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.043 [2024-07-14 15:10:13.078733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.078868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.078909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 
00:37:34.044 [2024-07-14 15:10:13.079541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.079853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.079983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.080934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.080968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 
00:37:34.044 [2024-07-14 15:10:13.081079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.081250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.081392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.081589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.081730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.081926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.081961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 
00:37:34.044 [2024-07-14 15:10:13.082662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.082867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.082982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.083943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.083979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 00:37:34.044 [2024-07-14 15:10:13.084117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.084150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.044 qpair failed and we were unable to recover it. 
00:37:34.044 [2024-07-14 15:10:13.084261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.044 [2024-07-14 15:10:13.084294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.084455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.084488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.084594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.084628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.084787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.084820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.084927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.084961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.085082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.085129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.085276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.085312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.085474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.085507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.085617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.085651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.085799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.085832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 
00:37:34.045 [2024-07-14 15:10:13.085976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.086914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.086962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.087080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.087244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.087422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 
00:37:34.045 [2024-07-14 15:10:13.087566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.087728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.087874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.087914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.088826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.088956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.089004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 
00:37:34.045 [2024-07-14 15:10:13.089133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.089168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.089284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.045 [2024-07-14 15:10:13.089320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.045 qpair failed and we were unable to recover it. 00:37:34.045 [2024-07-14 15:10:13.089430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.089469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.089587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.089621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.089731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.089764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.089904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.089939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.090047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.090213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.090381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.090533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 
00:37:34.046 [2024-07-14 15:10:13.090700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.090848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.090890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.091909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.091944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.092060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.092229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 
00:37:34.046 [2024-07-14 15:10:13.092370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.092516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.092688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.092867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.092915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.093061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.093095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.093228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.046 [2024-07-14 15:10:13.093263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.046 qpair failed and we were unable to recover it. 00:37:34.046 [2024-07-14 15:10:13.093403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.093438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.093573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.093607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.093713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.093746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.093856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.093897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 
00:37:34.047 [2024-07-14 15:10:13.094036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.094194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.094373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.094510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.094683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.094852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.094892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 
00:37:34.047 [2024-07-14 15:10:13.095603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.095922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.095961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.096891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.096924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 
00:37:34.047 [2024-07-14 15:10:13.097214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.097856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.097981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.098175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.098312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.098458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.098631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 
00:37:34.047 [2024-07-14 15:10:13.098822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.098870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.047 [2024-07-14 15:10:13.099032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.047 [2024-07-14 15:10:13.099068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.047 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.099960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.099994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.100119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.100154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.100302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.100347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 
00:37:34.048 [2024-07-14 15:10:13.100488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.100521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.100656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.100689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.100797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.100829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.100963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.101902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.101937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 
00:37:34.048 [2024-07-14 15:10:13.102041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.102923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.102956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 
00:37:34.048 [2024-07-14 15:10:13.103505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.103855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.103975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.104011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.104159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.104195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.104360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.104394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.104504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.104537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.104654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.048 [2024-07-14 15:10:13.104687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.048 qpair failed and we were unable to recover it. 00:37:34.048 [2024-07-14 15:10:13.104796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.104830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.104991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 
00:37:34.049 [2024-07-14 15:10:13.105159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.105291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.105429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.105608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.105750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.105913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.105961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.106079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.106275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.106454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.106591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 
00:37:34.049 [2024-07-14 15:10:13.106732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.106893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.106941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.107915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.108106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.108259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 
00:37:34.049 [2024-07-14 15:10:13.108433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.108576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.108719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.108895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.108928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.109803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.109836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 
00:37:34.049 [2024-07-14 15:10:13.109969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.110002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.110117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.049 [2024-07-14 15:10:13.110149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.049 qpair failed and we were unable to recover it. 00:37:34.049 [2024-07-14 15:10:13.110284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.110317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.110415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.110460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.110618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.110651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.110765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.110796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.110922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.110956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.111077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.111253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.111394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 
00:37:34.050 [2024-07-14 15:10:13.111536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.111701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.111843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.111891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.112920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.112953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 
00:37:34.050 [2024-07-14 15:10:13.113082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.113249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.113420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.113612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.113782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.113935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.113966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.114087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.114134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.114254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.114410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.114443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 00:37:34.050 [2024-07-14 15:10:13.114560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.050 [2024-07-14 15:10:13.114594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.050 qpair failed and we were unable to recover it. 
00:37:34.050 [2024-07-14 15:10:13.114706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.114738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.114849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.114888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.114989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.115917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.115950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.116052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 
00:37:34.051 [2024-07-14 15:10:13.116205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.116399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.116536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.116685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.116870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.116910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.117025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.117178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.117349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.117515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.117688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 
00:37:34.051 [2024-07-14 15:10:13.117829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.117874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.051 qpair failed and we were unable to recover it. 00:37:34.051 [2024-07-14 15:10:13.118759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.051 [2024-07-14 15:10:13.118794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.118970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.119167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.119343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 
00:37:34.052 [2024-07-14 15:10:13.119478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.119615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.119778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.119812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.119953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.120962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.120996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 
00:37:34.052 [2024-07-14 15:10:13.121105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.121138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.121270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.052 [2024-07-14 15:10:13.121304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.052 qpair failed and we were unable to recover it. 00:37:34.052 [2024-07-14 15:10:13.121407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.121445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.121583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.121616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.121755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.121788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.121933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.121967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.122121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.122306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.122454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.122625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 
00:37:34.053 [2024-07-14 15:10:13.122786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.122940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.122975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.123905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.123940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.124072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.124234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 
00:37:34.053 [2024-07-14 15:10:13.124420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.124561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.124708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.124848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.124888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.125003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.125036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.125177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.125211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.053 [2024-07-14 15:10:13.125347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.053 [2024-07-14 15:10:13.125380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.053 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.125519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.125551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.125659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.125691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.125806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.125844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 
00:37:34.054 [2024-07-14 15:10:13.125989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.126828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.126977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.127143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.127282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.127416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 
00:37:34.054 [2024-07-14 15:10:13.127597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.127751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.127936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.127987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.128903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.128949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.129081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.129114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 
00:37:34.054 [2024-07-14 15:10:13.129231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.054 [2024-07-14 15:10:13.129263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.054 qpair failed and we were unable to recover it. 00:37:34.054 [2024-07-14 15:10:13.129399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.129430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.129547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.129579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.129711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.129744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.129873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.129914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 
00:37:34.055 [2024-07-14 15:10:13.130773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.130941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.130987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.131169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.131341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.131480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.131655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.131818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.131993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.132140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.132290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 
00:37:34.055 [2024-07-14 15:10:13.132463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.132610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.132745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.132890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.132923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.133031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.133063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.133192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.055 [2024-07-14 15:10:13.133225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.055 qpair failed and we were unable to recover it. 00:37:34.055 [2024-07-14 15:10:13.133355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.133387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.133493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.133526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.133629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.133662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.133789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.133822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 
00:37:34.056 [2024-07-14 15:10:13.133981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.134936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.134970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.135090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.135239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.135386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 
00:37:34.056 [2024-07-14 15:10:13.135581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.135732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.135867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.135911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.136964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.136999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 
00:37:34.056 [2024-07-14 15:10:13.137116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.137149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.137249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.056 [2024-07-14 15:10:13.137283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.056 qpair failed and we were unable to recover it. 00:37:34.056 [2024-07-14 15:10:13.137424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.137457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.137593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.137626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.137742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.137774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.137926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.137959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.138067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.138233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.138369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.138511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 
00:37:34.057 [2024-07-14 15:10:13.138735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.138903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.138938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.139847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.139891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 
00:37:34.057 [2024-07-14 15:10:13.140303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.140840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.140996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.141031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.057 qpair failed and we were unable to recover it. 00:37:34.057 [2024-07-14 15:10:13.141151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.057 [2024-07-14 15:10:13.141185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.141291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.141325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.141489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.141523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.141642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.141676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.141828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.141883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 
00:37:34.058 [2024-07-14 15:10:13.142011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.142954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.142988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.143099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.143132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.143266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.143300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 00:37:34.058 [2024-07-14 15:10:13.143434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.058 [2024-07-14 15:10:13.143467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.058 qpair failed and we were unable to recover it. 
00:37:34.058 [2024-07-14 15:10:13.143569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.058 [2024-07-14 15:10:13.143602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.058 qpair failed and we were unable to recover it.
00:37:34.058 A controller has encountered a failure and is being reset.
00:37:34.058 [2024-07-14 15:10:13.143835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.058 [2024-07-14 15:10:13.143894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
[2024-07-14 15:10:13.143924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:34.058 [2024-07-14 15:10:13.143963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:34.058 [2024-07-14 15:10:13.143991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:34.058 [2024-07-14 15:10:13.144016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:34.058 [2024-07-14 15:10:13.144038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:34.058 Unable to reset the controller.
00:37:34.058 [2024-07-14 15:10:13.298474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:34.058 [2024-07-14 15:10:13.298543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:34.058 [2024-07-14 15:10:13.298583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:34.058 [2024-07-14 15:10:13.298602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:34.058 [2024-07-14 15:10:13.298621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
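errno = 111 in the connect() failures above is ECONNREFUSED on Linux: while the target-side controller is down for the intentional disconnect and reset, nothing is accepting connections on 10.0.0.2:4420, so every reconnect attempt from the initiator is refused until the listener comes back. A minimal way to confirm the errno mapping and probe the listener from the test host (hypothetical commands, not part of the test script; assumes python3 and nc are installed):

  # errno 111 maps to "Connection refused" (ECONNREFUSED)
  python3 -c 'import os; print(os.strerror(111))'
  # check whether anything is currently accepting connections on the target address/port
  nc -z -w1 10.0.0.2 4420 && echo 'listener up' || echo 'connection refused'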
00:37:34.058 [2024-07-14 15:10:13.298914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:34.058 [2024-07-14 15:10:13.298965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:34.058 [2024-07-14 15:10:13.299009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:34.058 [2024-07-14 15:10:13.299019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 Malloc0 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 [2024-07-14 15:10:13.887969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 15:10:13 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 [2024-07-14 15:10:13.917540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.629 15:10:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2064957 00:37:34.888 Controller properly reset. 00:37:40.156 Initializing NVMe Controllers 00:37:40.156 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:40.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:40.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:40.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:40.156 Initialization complete. Launching workers. 
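For reference, the rpc_cmd calls traced above (rpc_cmd is the test harness's wrapper around scripts/rpc.py) amount to the following stand-alone target setup; a minimal sketch assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket:

    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0                                        # backing bdev for the namespace
    $RPC nvmf_create_transport -t tcp -o                                             # "*** TCP Transport Init ***" above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # subsystem, any host allowed
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # discovery listener on the same address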
00:37:40.156 Starting thread on core 1 00:37:40.156 Starting thread on core 2 00:37:40.156 Starting thread on core 3 00:37:40.156 Starting thread on core 0 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:40.156 00:37:40.156 real 0m11.594s 00:37:40.156 user 0m35.996s 00:37:40.156 sys 0m7.914s 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.156 ************************************ 00:37:40.156 END TEST nvmf_target_disconnect_tc2 00:37:40.156 ************************************ 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:40.156 rmmod nvme_tcp 00:37:40.156 rmmod nvme_fabrics 00:37:40.156 rmmod nvme_keyring 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2065479 ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2065479 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2065479 ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2065479 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2065479 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2065479' 00:37:40.156 killing process with pid 2065479 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2065479 00:37:40.156 15:10:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2065479 00:37:41.530 
15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:41.530 15:10:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.431 15:10:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:43.431 00:37:43.431 real 0m17.510s 00:37:43.431 user 1m3.869s 00:37:43.431 sys 0m10.627s 00:37:43.431 15:10:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.431 15:10:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:43.431 ************************************ 00:37:43.431 END TEST nvmf_target_disconnect 00:37:43.431 ************************************ 00:37:43.431 15:10:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:43.431 15:10:22 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:43.431 15:10:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:43.431 15:10:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.689 15:10:22 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:43.689 00:37:43.689 real 28m59.692s 00:37:43.689 user 78m39.172s 00:37:43.689 sys 5m53.988s 00:37:43.689 15:10:22 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.689 15:10:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.689 ************************************ 00:37:43.689 END TEST nvmf_tcp 00:37:43.689 ************************************ 00:37:43.689 15:10:22 -- common/autotest_common.sh@1142 -- # return 0 00:37:43.689 15:10:22 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:43.689 15:10:22 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:43.689 15:10:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:43.689 15:10:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:43.689 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:37:43.689 ************************************ 00:37:43.689 START TEST spdkcli_nvmf_tcp 00:37:43.689 ************************************ 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:43.689 * Looking for test storage... 
00:37:43.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2066720 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2066720 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2066720 ']' 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:43.689 15:10:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.689 [2024-07-14 15:10:22.961980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:43.689 [2024-07-14 15:10:22.962136] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066720 ] 00:37:43.948 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.948 [2024-07-14 15:10:23.086769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:44.208 [2024-07-14 15:10:23.337931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.208 [2024-07-14 15:10:23.337938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.775 15:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:44.775 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:44.775 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:44.775 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:44.775 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:44.775 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:44.775 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:44.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:44.775 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:44.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:44.775 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:44.775 ' 00:37:47.304 [2024-07-14 15:10:26.609931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.677 [2024-07-14 15:10:27.835498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:51.250 [2024-07-14 15:10:30.098988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:53.157 [2024-07-14 15:10:32.061394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:54.537 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:54.537 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:54.537 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:54.537 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.537 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.537 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:54.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:54.537 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:54.537 15:10:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:54.795 15:10:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:54.795 15:10:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:54.795 15:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:54.795 15:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.795 15:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:55.052 15:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:55.052 15:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:55.052 15:10:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:55.052 15:10:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:55.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:55.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:55.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:55.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:55.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:55.052 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:55.052 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:55.052 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:55.052 ' 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:01.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:01.631 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:01.631 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:01.631 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2066720 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2066720 ']' 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2066720 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2066720 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2066720' 00:38:01.631 killing process with pid 2066720 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2066720 00:38:01.631 15:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2066720 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2066720 ']' 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2066720 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2066720 ']' 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2066720 00:38:01.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2066720) - No such process 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2066720 is not found' 00:38:01.890 Process with pid 2066720 is not found 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:01.890 00:38:01.890 real 0m18.356s 00:38:01.890 user 0m37.694s 00:38:01.890 sys 0m1.090s 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:01.890 15:10:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.890 ************************************ 00:38:01.890 END TEST spdkcli_nvmf_tcp 00:38:01.890 ************************************ 00:38:01.890 15:10:41 -- common/autotest_common.sh@1142 -- # return 0 00:38:01.890 15:10:41 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:01.890 15:10:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:01.890 15:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:01.890 15:10:41 -- common/autotest_common.sh@10 -- # set +x 00:38:02.149 ************************************ 00:38:02.149 START TEST nvmf_identify_passthru 00:38:02.149 ************************************ 00:38:02.149 15:10:41 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:02.149 * Looking for test storage... 00:38:02.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:02.149 15:10:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.149 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.149 15:10:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.149 15:10:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.149 15:10:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.149 15:10:41 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.149 15:10:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.149 15:10:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.149 15:10:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:02.150 15:10:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.150 15:10:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.150 15:10:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.150 15:10:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:02.150 15:10:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.150 15:10:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.150 15:10:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:02.150 15:10:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:02.150 15:10:41 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:02.150 15:10:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.053 15:10:43 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:04.053 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:04.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:04.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:04.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:04.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:04.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:04.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:38:04.054 00:38:04.054 --- 10.0.0.2 ping statistics --- 00:38:04.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.054 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:04.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:04.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:38:04.054 00:38:04.054 --- 10.0.0.1 ping statistics --- 00:38:04.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.054 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:04.054 15:10:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:38:04.054 15:10:43 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:04.054 15:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:04.054 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.315 
15:10:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:38:09.315 15:10:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:09.315 15:10:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:09.315 15:10:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:09.315 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2071562 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.510 15:10:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2071562 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2071562 ']' 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:13.510 15:10:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.510 [2024-07-14 15:10:52.076176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:13.510 [2024-07-14 15:10:52.076335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.510 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.510 [2024-07-14 15:10:52.210434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.510 [2024-07-14 15:10:52.469525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.510 [2024-07-14 15:10:52.469604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:13.510 [2024-07-14 15:10:52.469631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.510 [2024-07-14 15:10:52.469653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.510 [2024-07-14 15:10:52.469674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.510 [2024-07-14 15:10:52.469796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.510 [2024-07-14 15:10:52.469866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.510 [2024-07-14 15:10:52.469956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.510 [2024-07-14 15:10:52.469966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:38:13.768 15:10:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.768 INFO: Log level set to 20 00:38:13.768 INFO: Requests: 00:38:13.768 { 00:38:13.768 "jsonrpc": "2.0", 00:38:13.768 "method": "nvmf_set_config", 00:38:13.768 "id": 1, 00:38:13.768 "params": { 00:38:13.768 "admin_cmd_passthru": { 00:38:13.768 "identify_ctrlr": true 00:38:13.768 } 00:38:13.768 } 00:38:13.768 } 00:38:13.768 00:38:13.768 INFO: response: 00:38:13.768 { 00:38:13.768 "jsonrpc": "2.0", 00:38:13.768 "id": 1, 00:38:13.768 "result": true 00:38:13.768 } 00:38:13.768 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.768 15:10:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:13.768 15:10:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.768 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.768 INFO: Setting log level to 20 00:38:13.768 INFO: Setting log level to 20 00:38:13.768 INFO: Log level set to 20 00:38:13.768 INFO: Log level set to 20 00:38:13.768 INFO: Requests: 00:38:13.768 { 00:38:13.768 "jsonrpc": "2.0", 00:38:13.768 "method": "framework_start_init", 00:38:13.768 "id": 1 00:38:13.768 } 00:38:13.768 00:38:13.768 INFO: Requests: 00:38:13.769 { 00:38:13.769 "jsonrpc": "2.0", 00:38:13.769 "method": "framework_start_init", 00:38:13.769 "id": 1 00:38:13.769 } 00:38:13.769 00:38:14.027 [2024-07-14 15:10:53.325304] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:14.283 INFO: response: 00:38:14.283 { 00:38:14.283 "jsonrpc": "2.0", 00:38:14.283 "id": 1, 00:38:14.283 "result": true 00:38:14.283 } 00:38:14.283 00:38:14.283 INFO: response: 00:38:14.283 { 00:38:14.283 "jsonrpc": "2.0", 00:38:14.283 "id": 1, 00:38:14.283 "result": true 00:38:14.283 } 00:38:14.283 00:38:14.283 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.283 15:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:14.283 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.283 15:10:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.283 INFO: Setting log level to 40 00:38:14.283 INFO: Setting log level to 40 00:38:14.283 INFO: Setting log level to 40 00:38:14.283 [2024-07-14 15:10:53.338073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.283 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.283 15:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:14.284 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:14.284 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.284 15:10:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:38:14.284 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.284 15:10:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 Nvme0n1 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.567 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.567 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.567 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 [2024-07-14 15:10:56.285068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.567 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.567 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 [ 00:38:17.567 { 00:38:17.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:17.567 "subtype": "Discovery", 00:38:17.567 "listen_addresses": [], 00:38:17.567 "allow_any_host": true, 00:38:17.567 "hosts": [] 00:38:17.567 }, 00:38:17.567 { 00:38:17.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.567 "subtype": "NVMe", 00:38:17.568 "listen_addresses": [ 00:38:17.568 { 00:38:17.568 "trtype": "TCP", 00:38:17.568 "adrfam": "IPv4", 00:38:17.568 "traddr": "10.0.0.2", 00:38:17.568 "trsvcid": "4420" 00:38:17.568 } 00:38:17.568 ], 00:38:17.568 "allow_any_host": true, 00:38:17.568 "hosts": [], 00:38:17.568 "serial_number": 
"SPDK00000000000001", 00:38:17.568 "model_number": "SPDK bdev Controller", 00:38:17.568 "max_namespaces": 1, 00:38:17.568 "min_cntlid": 1, 00:38:17.568 "max_cntlid": 65519, 00:38:17.568 "namespaces": [ 00:38:17.568 { 00:38:17.568 "nsid": 1, 00:38:17.568 "bdev_name": "Nvme0n1", 00:38:17.568 "name": "Nvme0n1", 00:38:17.568 "nguid": "C1D55ED524A34B82A072CBAA851EFA15", 00:38:17.568 "uuid": "c1d55ed5-24a3-4b82-a072-cbaa851efa15" 00:38:17.568 } 00:38:17.568 ] 00:38:17.568 } 00:38:17.568 ] 00:38:17.568 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:17.568 EAL: No free 2048 kB hugepages reported on node 1 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:17.568 EAL: No free 2048 kB hugepages reported on node 1 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:17.568 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:17.568 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.568 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.827 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:17.827 15:10:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:17.827 rmmod nvme_tcp 00:38:17.827 rmmod nvme_fabrics 00:38:17.827 rmmod nvme_keyring 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:17.827 15:10:56 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2071562 ']' 00:38:17.827 15:10:56 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2071562 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2071562 ']' 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2071562 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:17.827 15:10:56 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2071562 00:38:17.827 15:10:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:17.827 15:10:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:17.827 15:10:57 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2071562' 00:38:17.827 killing process with pid 2071562 00:38:17.827 15:10:57 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2071562 00:38:17.827 15:10:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2071562 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:20.380 15:10:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.380 15:10:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:20.380 15:10:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.928 15:11:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.928 00:38:22.928 real 0m20.441s 00:38:22.928 user 0m33.684s 00:38:22.928 sys 0m2.633s 00:38:22.928 15:11:01 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:22.928 15:11:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:22.928 ************************************ 00:38:22.928 END TEST nvmf_identify_passthru 00:38:22.928 ************************************ 00:38:22.928 15:11:01 -- common/autotest_common.sh@1142 -- # return 0 00:38:22.928 15:11:01 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.928 15:11:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:22.928 15:11:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:22.928 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:38:22.928 ************************************ 00:38:22.928 START TEST nvmf_dif 00:38:22.928 ************************************ 00:38:22.928 15:11:01 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.928 * Looking for test storage... 
00:38:22.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.928 15:11:01 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.928 15:11:01 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.928 15:11:01 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.928 15:11:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.928 15:11:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.928 15:11:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.928 15:11:01 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:22.928 15:11:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:22.928 15:11:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.928 15:11:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:22.928 15:11:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:22.928 15:11:01 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:22.928 15:11:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:24.830 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:24.830 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:24.830 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:24.830 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.830 15:11:03 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:24.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:24.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:38:24.830 00:38:24.830 --- 10.0.0.2 ping statistics --- 00:38:24.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.830 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:24.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:38:24.830 00:38:24.830 --- 10.0.0.1 ping statistics --- 00:38:24.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.830 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:24.830 15:11:03 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:25.761 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:25.761 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:25.761 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:25.761 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:25.761 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:25.761 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:25.761 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:25.761 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:25.761 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:25.761 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:25.761 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:25.761 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:25.761 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:25.761 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:25.761 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:25.761 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:25.761 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:25.761 15:11:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:25.761 15:11:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2074973 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:25.761 15:11:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2074973 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2074973 ']' 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:25.761 15:11:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.761 [2024-07-14 15:11:05.037386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:25.761 [2024-07-14 15:11:05.037529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.019 EAL: No free 2048 kB hugepages reported on node 1 00:38:26.019 [2024-07-14 15:11:05.173477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.277 [2024-07-14 15:11:05.424788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.277 [2024-07-14 15:11:05.424864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.278 [2024-07-14 15:11:05.424902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.278 [2024-07-14 15:11:05.424947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.278 [2024-07-14 15:11:05.424966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:26.278 [2024-07-14 15:11:05.425039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:26.847 15:11:05 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 15:11:05 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.847 15:11:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:26.847 15:11:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 [2024-07-14 15:11:05.991456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.847 15:11:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:26.847 15:11:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 ************************************ 00:38:26.847 START TEST fio_dif_1_default 00:38:26.847 ************************************ 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 bdev_null0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.847 [2024-07-14 15:11:06.047760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:26.847 { 00:38:26.847 "params": { 00:38:26.847 "name": "Nvme$subsystem", 00:38:26.847 "trtype": "$TEST_TRANSPORT", 00:38:26.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.847 "adrfam": "ipv4", 00:38:26.847 "trsvcid": "$NVMF_PORT", 00:38:26.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.847 "hdgst": ${hdgst:-false}, 00:38:26.847 "ddgst": ${ddgst:-false} 00:38:26.847 }, 00:38:26.847 "method": "bdev_nvme_attach_controller" 00:38:26.847 } 00:38:26.847 EOF 00:38:26.847 )") 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:26.847 "params": { 00:38:26.847 "name": "Nvme0", 00:38:26.847 "trtype": "tcp", 00:38:26.847 "traddr": "10.0.0.2", 00:38:26.847 "adrfam": "ipv4", 00:38:26.847 "trsvcid": "4420", 00:38:26.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.847 "hdgst": false, 00:38:26.847 "ddgst": false 00:38:26.847 }, 00:38:26.847 "method": "bdev_nvme_attach_controller" 00:38:26.847 }' 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:26.847 15:11:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.106 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:27.106 fio-3.35 00:38:27.106 Starting 1 thread 00:38:27.364 EAL: No free 2048 kB hugepages reported on node 1 00:38:39.565 00:38:39.565 filename0: (groupid=0, jobs=1): err= 0: pid=2075324: Sun Jul 14 15:11:17 2024 00:38:39.565 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:38:39.565 slat (nsec): min=5940, max=68888, avg=14397.65, stdev=4734.17 00:38:39.565 clat (usec): min=690, max=44207, avg=21048.87, stdev=20184.62 00:38:39.565 lat (usec): min=702, max=44229, avg=21063.26, stdev=20184.57 00:38:39.565 clat percentiles (usec): 00:38:39.565 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 775], 00:38:39.565 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[41157], 60.00th=[41157], 00:38:39.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:39.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:38:39.565 | 99.99th=[44303] 00:38:39.565 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=760.00, stdev=25.16, samples=20 00:38:39.565 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:38:39.565 lat (usec) : 750=6.25%, 1000=43.54% 00:38:39.565 lat (msec) : 50=50.21% 00:38:39.565 cpu : usr=92.12%, sys=7.38%, ctx=14, majf=0, minf=1640 00:38:39.565 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:39.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.565 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:39.565 latency 
: target=0, window=0, percentile=100.00%, depth=4 00:38:39.565 00:38:39.565 Run status group 0 (all jobs): 00:38:39.565 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7616KiB (7799kB), run=10041-10041msec 00:38:39.565 ----------------------------------------------------- 00:38:39.565 Suppressions used: 00:38:39.565 count bytes template 00:38:39.565 1 8 /usr/src/fio/parse.c 00:38:39.565 1 8 libtcmalloc_minimal.so 00:38:39.565 1 904 libcrypto.so 00:38:39.565 ----------------------------------------------------- 00:38:39.565 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 00:38:39.565 real 0m12.389s 00:38:39.565 user 0m11.506s 00:38:39.565 sys 0m1.150s 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 ************************************ 00:38:39.565 END TEST fio_dif_1_default 00:38:39.565 ************************************ 00:38:39.565 15:11:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:39.565 15:11:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:39.565 15:11:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:39.565 15:11:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 ************************************ 00:38:39.565 START TEST fio_dif_1_multi_subsystems 00:38:39.565 ************************************ 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@18 -- # local sub_id=0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 bdev_null0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 [2024-07-14 15:11:18.481633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 bdev_null1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.565 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:39.566 { 00:38:39.566 "params": { 00:38:39.566 "name": "Nvme$subsystem", 00:38:39.566 "trtype": "$TEST_TRANSPORT", 00:38:39.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:39.566 "adrfam": "ipv4", 00:38:39.566 "trsvcid": "$NVMF_PORT", 00:38:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:39.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:39.566 "hdgst": ${hdgst:-false}, 00:38:39.566 "ddgst": ${ddgst:-false} 00:38:39.566 }, 00:38:39.566 "method": "bdev_nvme_attach_controller" 00:38:39.566 } 00:38:39.566 EOF 00:38:39.566 )") 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:39.566 15:11:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:39.566 { 00:38:39.566 "params": { 00:38:39.566 "name": "Nvme$subsystem", 00:38:39.566 "trtype": "$TEST_TRANSPORT", 00:38:39.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:39.566 "adrfam": "ipv4", 00:38:39.566 "trsvcid": "$NVMF_PORT", 00:38:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:39.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:39.566 "hdgst": ${hdgst:-false}, 00:38:39.566 "ddgst": ${ddgst:-false} 00:38:39.566 }, 00:38:39.566 "method": "bdev_nvme_attach_controller" 00:38:39.566 } 00:38:39.566 EOF 00:38:39.566 )") 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
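[annotation] The trace above builds each DIF-enabled null bdev and exposes it over NVMe/TCP with four RPCs per subsystem before gen_nvmf_target_json assembles the bdev_nvme attach parameters for the fio plugin. A minimal standalone sketch of the same target-side sequence, replayed with scripts/rpc.py against an already-running nvmf_tgt (the tcp transport is assumed to have been created earlier in the test; addresses, sizes and NQNs are taken from the log):

    #!/usr/bin/env bash
    # Sketch only: replays the rpc_cmd calls traced above via scripts/rpc.py.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py

    for i in 0 1; do
        # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
        $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The teardown seen later in the log (nvmf_delete_subsystem followed by bdev_null_delete) is the inverse of this sequence.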
00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:39.566 "params": { 00:38:39.566 "name": "Nvme0", 00:38:39.566 "trtype": "tcp", 00:38:39.566 "traddr": "10.0.0.2", 00:38:39.566 "adrfam": "ipv4", 00:38:39.566 "trsvcid": "4420", 00:38:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.566 "hdgst": false, 00:38:39.566 "ddgst": false 00:38:39.566 }, 00:38:39.566 "method": "bdev_nvme_attach_controller" 00:38:39.566 },{ 00:38:39.566 "params": { 00:38:39.566 "name": "Nvme1", 00:38:39.566 "trtype": "tcp", 00:38:39.566 "traddr": "10.0.0.2", 00:38:39.566 "adrfam": "ipv4", 00:38:39.566 "trsvcid": "4420", 00:38:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:39.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:39.566 "hdgst": false, 00:38:39.566 "ddgst": false 00:38:39.566 }, 00:38:39.566 "method": "bdev_nvme_attach_controller" 00:38:39.566 }' 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:39.566 15:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.566 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:39.566 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:39.566 fio-3.35 00:38:39.566 Starting 2 threads 00:38:39.825 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.017 00:38:52.017 filename0: (groupid=0, jobs=1): err= 0: pid=2076856: Sun Jul 14 15:11:30 2024 00:38:52.017 read: IOPS=190, BW=760KiB/s (778kB/s)(7616KiB/10020msec) 00:38:52.017 slat (nsec): min=7347, max=43086, avg=13945.62, stdev=4810.46 00:38:52.017 clat (usec): min=686, max=44382, avg=21007.80, stdev=20188.94 00:38:52.017 lat (usec): min=696, max=44404, avg=21021.75, stdev=20188.08 00:38:52.017 clat percentiles (usec): 00:38:52.017 | 1.00th=[ 750], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 799], 00:38:52.017 | 30.00th=[ 824], 40.00th=[ 865], 50.00th=[ 1385], 60.00th=[41157], 00:38:52.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:52.017 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:38:52.017 | 99.99th=[44303] 00:38:52.017 bw ( KiB/s): min= 672, max= 832, per=50.21%, avg=760.00, stdev=30.93, samples=20 00:38:52.017 iops : min= 168, max= 208, avg=190.00, stdev= 7.73, samples=20 00:38:52.017 lat (usec) : 750=1.42%, 1000=47.53% 00:38:52.017 lat (msec) : 2=1.05%, 50=50.00% 00:38:52.017 cpu : usr=94.17%, sys=5.34%, ctx=13, majf=0, minf=1636 00:38:52.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:52.017 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:52.017 filename1: (groupid=0, jobs=1): err= 0: pid=2076857: Sun Jul 14 15:11:30 2024 00:38:52.017 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10003msec) 00:38:52.017 slat (nsec): min=6312, max=85088, avg=13555.65, stdev=4550.42 00:38:52.017 clat (usec): min=726, max=43545, avg=21150.83, stdev=20107.70 00:38:52.017 lat (usec): min=737, max=43597, avg=21164.38, stdev=20107.51 00:38:52.017 clat percentiles (usec): 00:38:52.017 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 840], 20.00th=[ 873], 00:38:52.017 | 30.00th=[ 898], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:38:52.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:52.017 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:38:52.017 | 99.99th=[43779] 00:38:52.017 bw ( KiB/s): min= 672, max= 768, per=49.94%, avg=756.21, stdev=28.64, samples=19 00:38:52.017 iops : min= 168, max= 192, avg=189.05, stdev= 7.16, samples=19 00:38:52.017 lat (usec) : 750=0.21%, 1000=49.26% 00:38:52.017 lat (msec) : 2=0.11%, 50=50.42% 00:38:52.017 cpu : usr=93.97%, sys=5.54%, ctx=14, majf=0, minf=1636 00:38:52.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.017 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:52.017 00:38:52.017 Run status group 0 (all jobs): 00:38:52.017 READ: bw=1514KiB/s (1550kB/s), 755KiB/s-760KiB/s (773kB/s-778kB/s), io=14.8MiB (15.5MB), run=10003-10020msec 00:38:52.017 ----------------------------------------------------- 00:38:52.017 Suppressions used: 00:38:52.017 count bytes template 00:38:52.017 2 16 /usr/src/fio/parse.c 00:38:52.017 1 8 libtcmalloc_minimal.so 00:38:52.017 1 904 libcrypto.so 00:38:52.017 ----------------------------------------------------- 00:38:52.017 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.017 00:38:52.017 real 0m12.640s 00:38:52.017 user 0m21.334s 00:38:52.017 sys 0m1.542s 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:52.017 15:11:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.017 ************************************ 00:38:52.017 END TEST fio_dif_1_multi_subsystems 00:38:52.017 ************************************ 00:38:52.017 15:11:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:52.017 15:11:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:52.018 15:11:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:52.018 15:11:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:52.018 15:11:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:52.018 ************************************ 00:38:52.018 START TEST fio_dif_rand_params 00:38:52.018 ************************************ 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.018 bdev_null0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.018 [2024-07-14 15:11:31.166699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:52.018 { 00:38:52.018 "params": { 00:38:52.018 "name": "Nvme$subsystem", 00:38:52.018 "trtype": "$TEST_TRANSPORT", 00:38:52.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.018 "adrfam": "ipv4", 00:38:52.018 "trsvcid": "$NVMF_PORT", 00:38:52.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.018 "hdgst": ${hdgst:-false}, 00:38:52.018 "ddgst": ${ddgst:-false} 00:38:52.018 }, 00:38:52.018 "method": "bdev_nvme_attach_controller" 00:38:52.018 } 00:38:52.018 EOF 00:38:52.018 )") 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:52.018 "params": { 00:38:52.018 "name": "Nvme0", 00:38:52.018 "trtype": "tcp", 00:38:52.018 "traddr": "10.0.0.2", 00:38:52.018 "adrfam": "ipv4", 00:38:52.018 "trsvcid": "4420", 00:38:52.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.018 "hdgst": false, 00:38:52.018 "ddgst": false 00:38:52.018 }, 00:38:52.018 "method": "bdev_nvme_attach_controller" 00:38:52.018 }' 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:52.018 15:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.275 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:52.275 ... 
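[annotation] The fio_bdev/fio_plugin wrappers traced above resolve the ASAN runtime from the plugin's dependencies, preload it together with the SPDK bdev engine, and hand fio a JSON bdev config over /dev/fd. A sketch of the equivalent manual invocation, with job parameters mirroring this test (randread, 128k blocks, iodepth 3, 3 jobs, 5 s); the bdev name Nvme0n1 and the bdev.json path are illustrative assumptions — the harness streams both the job file and the JSON config through file descriptors instead of files on disk:

    #!/usr/bin/env bash
    # Sketch, not the harness itself.  bdev.json is assumed to already contain
    # the bdev_nvme_attach_controller config printed by gen_nvmf_target_json.
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PLUGIN=$SPDK/build/fio/spdk_bdev

    cat > rand_params.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1
    EOF

    # Mirror the sanitizer handling traced above: locate libasan among the
    # plugin's shared-library dependencies and preload it ahead of the plugin.
    ASAN_LIB=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')
    LD_PRELOAD="${ASAN_LIB:+$ASAN_LIB }$PLUGIN" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./rand_params.fio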
00:38:52.275 fio-3.35 00:38:52.275 Starting 3 threads 00:38:52.275 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.848 00:38:58.848 filename0: (groupid=0, jobs=1): err= 0: pid=2078369: Sun Jul 14 15:11:37 2024 00:38:58.848 read: IOPS=190, BW=23.9MiB/s (25.0MB/s)(119MiB/5004msec) 00:38:58.848 slat (nsec): min=6110, max=60712, avg=21946.77, stdev=4259.01 00:38:58.848 clat (usec): min=6691, max=54473, avg=15690.08, stdev=4690.47 00:38:58.848 lat (usec): min=6711, max=54493, avg=15712.03, stdev=4690.44 00:38:58.848 clat percentiles (usec): 00:38:58.848 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11863], 20.00th=[13304], 00:38:58.848 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15533], 60.00th=[16188], 00:38:58.848 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19006], 00:38:58.848 | 99.00th=[46924], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:38:58.848 | 99.99th=[54264] 00:38:58.848 bw ( KiB/s): min=22784, max=25856, per=32.84%, avg=24396.80, stdev=935.17, samples=10 00:38:58.848 iops : min= 178, max= 202, avg=190.60, stdev= 7.31, samples=10 00:38:58.848 lat (msec) : 10=3.66%, 20=93.19%, 50=2.51%, 100=0.63% 00:38:58.848 cpu : usr=93.86%, sys=5.56%, ctx=8, majf=0, minf=1634 00:38:58.848 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 issued rwts: total=955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.848 filename0: (groupid=0, jobs=1): err= 0: pid=2078370: Sun Jul 14 15:11:37 2024 00:38:58.848 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(124MiB/5044msec) 00:38:58.848 slat (nsec): min=5770, max=56670, avg=24441.93, stdev=5819.13 00:38:58.848 clat (usec): min=7852, max=55976, avg=15229.05, stdev=5028.00 00:38:58.848 lat (usec): min=7872, max=55998, avg=15253.49, stdev=5027.53 00:38:58.848 clat percentiles (usec): 00:38:58.848 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11600], 20.00th=[12780], 00:38:58.848 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15008], 60.00th=[15533], 00:38:58.848 | 70.00th=[16057], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:38:58.848 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:38:58.848 | 99.99th=[55837] 00:38:58.848 bw ( KiB/s): min=23040, max=27904, per=33.99%, avg=25246.40, stdev=1389.34, samples=10 00:38:58.848 iops : min= 180, max= 218, avg=197.20, stdev=10.88, samples=10 00:38:58.848 lat (msec) : 10=4.55%, 20=92.82%, 50=1.62%, 100=1.01% 00:38:58.848 cpu : usr=89.45%, sys=7.75%, ctx=450, majf=0, minf=1637 00:38:58.848 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 issued rwts: total=989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.848 filename0: (groupid=0, jobs=1): err= 0: pid=2078371: Sun Jul 14 15:11:37 2024 00:38:58.848 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(123MiB/5005msec) 00:38:58.848 slat (nsec): min=6175, max=42424, avg=20865.15, stdev=4236.84 00:38:58.848 clat (usec): min=5596, max=56943, avg=15245.72, stdev=6583.39 00:38:58.848 lat (usec): min=5614, max=56967, avg=15266.59, stdev=6583.29 00:38:58.848 clat percentiles (usec): 
00:38:58.848 | 1.00th=[ 5735], 5.00th=[10028], 10.00th=[11600], 20.00th=[12649], 00:38:58.848 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14484], 60.00th=[15139], 00:38:58.848 | 70.00th=[15664], 80.00th=[16581], 90.00th=[17695], 95.00th=[18744], 00:38:58.848 | 99.00th=[53216], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:38:58.848 | 99.99th=[56886] 00:38:58.848 bw ( KiB/s): min=22528, max=27904, per=33.78%, avg=25088.00, stdev=1502.45, samples=10 00:38:58.848 iops : min= 176, max= 218, avg=196.00, stdev=11.74, samples=10 00:38:58.848 lat (msec) : 10=4.78%, 20=92.17%, 50=0.61%, 100=2.44% 00:38:58.848 cpu : usr=94.32%, sys=5.12%, ctx=9, majf=0, minf=1636 00:38:58.848 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.848 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.848 00:38:58.848 Run status group 0 (all jobs): 00:38:58.848 READ: bw=72.5MiB/s (76.1MB/s), 23.9MiB/s-24.5MiB/s (25.0MB/s-25.7MB/s), io=366MiB (384MB), run=5004-5044msec 00:38:59.415 ----------------------------------------------------- 00:38:59.415 Suppressions used: 00:38:59.415 count bytes template 00:38:59.415 5 44 /usr/src/fio/parse.c 00:38:59.415 1 8 libtcmalloc_minimal.so 00:38:59.415 1 904 libcrypto.so 00:38:59.415 ----------------------------------------------------- 00:38:59.415 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@28 -- # local sub 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 bdev_null0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.415 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.415 [2024-07-14 15:11:38.521155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 bdev_null1 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 
15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 bdev_null2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:59.416 15:11:38 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.416 { 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme$subsystem", 00:38:59.416 "trtype": "$TEST_TRANSPORT", 00:38:59.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "$NVMF_PORT", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.416 "hdgst": ${hdgst:-false}, 00:38:59.416 "ddgst": ${ddgst:-false} 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 } 00:38:59.416 EOF 00:38:59.416 )") 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.416 { 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme$subsystem", 00:38:59.416 "trtype": "$TEST_TRANSPORT", 00:38:59.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "$NVMF_PORT", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.416 "hdgst": 
${hdgst:-false}, 00:38:59.416 "ddgst": ${ddgst:-false} 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 } 00:38:59.416 EOF 00:38:59.416 )") 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.416 { 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme$subsystem", 00:38:59.416 "trtype": "$TEST_TRANSPORT", 00:38:59.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "$NVMF_PORT", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.416 "hdgst": ${hdgst:-false}, 00:38:59.416 "ddgst": ${ddgst:-false} 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 } 00:38:59.416 EOF 00:38:59.416 )") 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme0", 00:38:59.416 "trtype": "tcp", 00:38:59.416 "traddr": "10.0.0.2", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "4420", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:59.416 "hdgst": false, 00:38:59.416 "ddgst": false 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 },{ 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme1", 00:38:59.416 "trtype": "tcp", 00:38:59.416 "traddr": "10.0.0.2", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "4420", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.416 "hdgst": false, 00:38:59.416 "ddgst": false 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 },{ 00:38:59.416 "params": { 00:38:59.416 "name": "Nvme2", 00:38:59.416 "trtype": "tcp", 00:38:59.416 "traddr": "10.0.0.2", 00:38:59.416 "adrfam": "ipv4", 00:38:59.416 "trsvcid": "4420", 00:38:59.416 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:59.416 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:59.416 "hdgst": false, 00:38:59.416 "ddgst": false 00:38:59.416 }, 00:38:59.416 "method": "bdev_nvme_attach_controller" 00:38:59.416 }' 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:59.416 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:59.417 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:59.417 15:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.675 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.675 ... 00:38:59.675 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.675 ... 00:38:59.675 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.675 ... 00:38:59.675 fio-3.35 00:38:59.675 Starting 24 threads 00:38:59.675 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.880 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079350: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10001msec) 00:39:11.880 slat (nsec): min=10769, max=97066, avg=39856.25, stdev=11181.84 00:39:11.880 clat (msec): min=33, max=187, avg=49.90, stdev=25.34 00:39:11.880 lat (msec): min=33, max=187, avg=49.94, stdev=25.34 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.880 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 131], 00:39:11.880 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 188], 00:39:11.880 | 99.99th=[ 188] 00:39:11.880 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1259.79, stdev=420.55, samples=19 00:39:11.880 iops : min= 96, max= 384, avg=314.95, stdev=105.14, samples=19 00:39:11.880 lat (msec) : 50=93.34%, 100=0.13%, 250=6.53% 00:39:11.880 cpu : usr=97.73%, sys=1.67%, ctx=45, majf=0, minf=1636 00:39:11.880 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079351: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=318, BW=1273KiB/s (1303kB/s)(12.4MiB/10007msec) 00:39:11.880 slat (nsec): min=11842, max=82987, avg=31830.46, stdev=11055.99 00:39:11.880 clat (msec): min=32, max=353, avg=49.98, stdev=30.32 00:39:11.880 lat (msec): min=32, max=353, avg=50.01, stdev=30.32 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.880 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.880 | 99.00th=[ 161], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 355], 00:39:11.880 | 99.99th=[ 355] 00:39:11.880 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=443.94, samples=19 00:39:11.880 iops : min= 64, max= 384, avg=314.95, stdev=110.99, samples=19 00:39:11.880 lat (msec) : 50=94.47%, 250=5.03%, 500=0.50% 00:39:11.880 cpu : usr=96.76%, sys=2.18%, ctx=99, majf=0, minf=1636 00:39:11.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079352: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10005msec) 00:39:11.880 slat (usec): min=11, max=117, avg=37.85, stdev=20.04 00:39:11.880 clat (msec): min=22, max=320, avg=49.93, stdev=28.23 00:39:11.880 lat (msec): min=22, max=320, avg=49.97, stdev=28.24 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.880 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.880 | 99.00th=[ 161], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 321], 00:39:11.880 | 99.99th=[ 321] 00:39:11.880 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=427.79, samples=19 00:39:11.880 iops : min= 64, max= 384, avg=314.95, stdev=106.95, samples=19 00:39:11.880 lat (msec) : 50=93.91%, 100=0.69%, 250=5.34%, 500=0.06% 00:39:11.880 cpu : usr=96.48%, sys=2.27%, ctx=86, majf=0, minf=1633 00:39:11.880 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079353: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=321, BW=1286KiB/s (1316kB/s)(12.6MiB/10007msec) 00:39:11.880 slat (usec): min=8, max=117, avg=26.93, stdev=17.99 00:39:11.880 clat (msec): min=16, max=202, avg=49.55, stdev=24.79 00:39:11.880 lat (msec): min=16, max=202, avg=49.58, stdev=24.80 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:39:11.880 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 127], 00:39:11.880 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 192], 99.95th=[ 203], 00:39:11.880 | 99.99th=[ 203] 00:39:11.880 bw ( KiB/s): min= 400, max= 1536, per=4.16%, avg=1273.26, stdev=394.21, samples=19 00:39:11.880 iops : min= 100, max= 384, avg=318.32, stdev=98.55, samples=19 00:39:11.880 lat (msec) : 20=0.50%, 50=93.03%, 100=0.56%, 250=5.91% 00:39:11.880 cpu : usr=96.75%, sys=2.14%, ctx=112, majf=0, minf=1637 00:39:11.880 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079354: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=318, BW=1272KiB/s (1303kB/s)(12.4MiB/10012msec) 00:39:11.880 slat (nsec): min=12421, max=83002, avg=33883.17, stdev=10431.96 00:39:11.880 clat (msec): min=18, max=315, avg=50.04, stdev=30.17 00:39:11.880 lat (msec): min=18, max=315, avg=50.07, stdev=30.17 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 
00:39:11.880 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.880 | 99.00th=[ 161], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:39:11.880 | 99.99th=[ 317] 00:39:11.880 bw ( KiB/s): min= 256, max= 1536, per=4.11%, avg=1258.95, stdev=442.28, samples=19 00:39:11.880 iops : min= 64, max= 384, avg=314.74, stdev=110.57, samples=19 00:39:11.880 lat (msec) : 20=0.06%, 50=94.35%, 100=0.06%, 250=5.03%, 500=0.50% 00:39:11.880 cpu : usr=95.97%, sys=2.44%, ctx=138, majf=0, minf=1636 00:39:11.880 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=2079355: Sun Jul 14 15:11:50 2024 00:39:11.880 read: IOPS=318, BW=1275KiB/s (1306kB/s)(12.5MiB/10036msec) 00:39:11.880 slat (usec): min=7, max=102, avg=26.20, stdev=11.29 00:39:11.880 clat (msec): min=25, max=230, avg=49.95, stdev=26.15 00:39:11.880 lat (msec): min=25, max=230, avg=49.97, stdev=26.15 00:39:11.880 clat percentiles (msec): 00:39:11.880 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:39:11.880 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.880 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 144], 00:39:11.880 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 228], 99.95th=[ 230], 00:39:11.880 | 99.99th=[ 230] 00:39:11.880 bw ( KiB/s): min= 384, max= 1536, per=4.16%, avg=1273.25, stdev=414.26, samples=20 00:39:11.880 iops : min= 96, max= 384, avg=318.30, stdev=103.56, samples=20 00:39:11.880 lat (msec) : 50=93.56%, 100=0.75%, 250=5.69% 00:39:11.880 cpu : usr=97.03%, sys=1.92%, ctx=187, majf=0, minf=1634 00:39:11.880 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.880 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename0: (groupid=0, jobs=1): err= 0: pid=2079356: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10026msec) 00:39:11.881 slat (nsec): min=11656, max=97958, avg=38619.91, stdev=10500.23 00:39:11.881 clat (msec): min=27, max=222, avg=49.79, stdev=27.14 00:39:11.881 lat (msec): min=27, max=222, avg=49.83, stdev=27.14 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 31], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 161], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 224], 00:39:11.881 | 99.99th=[ 224] 00:39:11.881 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=426.99, samples=19 00:39:11.881 iops : min= 64, max= 384, avg=314.95, stdev=106.75, samples=19 00:39:11.881 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.881 cpu : usr=97.85%, sys=1.57%, ctx=30, majf=0, minf=1634 00:39:11.881 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.881 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename0: (groupid=0, jobs=1): err= 0: pid=2079357: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10001msec) 00:39:11.881 slat (usec): min=16, max=196, avg=59.35, stdev= 9.69 00:39:11.881 clat (msec): min=22, max=316, avg=49.72, stdev=27.96 00:39:11.881 lat (msec): min=22, max=316, avg=49.78, stdev=27.96 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 161], 99.50th=[ 245], 99.90th=[ 245], 99.95th=[ 317], 00:39:11.881 | 99.99th=[ 317] 00:39:11.881 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=427.79, samples=19 00:39:11.881 iops : min= 64, max= 384, avg=314.95, stdev=106.95, samples=19 00:39:11.881 lat (msec) : 50=93.84%, 100=0.63%, 250=5.46%, 500=0.06% 00:39:11.881 cpu : usr=95.20%, sys=2.81%, ctx=124, majf=0, minf=1636 00:39:11.881 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079358: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10027msec) 00:39:11.881 slat (usec): min=13, max=104, avg=58.31, stdev=11.09 00:39:11.881 clat (msec): min=31, max=189, avg=49.60, stdev=26.43 00:39:11.881 lat (msec): min=31, max=189, avg=49.65, stdev=26.42 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 190], 99.95th=[ 190], 00:39:11.881 | 99.99th=[ 190] 00:39:11.881 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1259.63, stdev=425.03, samples=19 00:39:11.881 iops : min= 96, max= 384, avg=314.89, stdev=106.25, samples=19 00:39:11.881 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.881 cpu : usr=96.27%, sys=2.28%, ctx=97, majf=0, minf=1636 00:39:11.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079359: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10027msec) 00:39:11.881 slat (nsec): min=10531, max=86333, avg=31721.01, stdev=7728.83 00:39:11.881 clat (msec): min=32, max=203, avg=49.86, stdev=26.55 00:39:11.881 lat (msec): min=32, max=203, avg=49.89, stdev=26.55 00:39:11.881 clat 
percentiles (msec): 00:39:11.881 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 161], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 205], 00:39:11.881 | 99.99th=[ 205] 00:39:11.881 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1259.63, stdev=425.03, samples=19 00:39:11.881 iops : min= 96, max= 384, avg=314.89, stdev=106.25, samples=19 00:39:11.881 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.881 cpu : usr=97.83%, sys=1.60%, ctx=108, majf=0, minf=1636 00:39:11.881 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079360: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1279KiB/s (1310kB/s)(12.5MiB/10009msec) 00:39:11.881 slat (usec): min=6, max=113, avg=66.20, stdev=12.60 00:39:11.881 clat (msec): min=33, max=160, avg=49.44, stdev=24.65 00:39:11.881 lat (msec): min=33, max=160, avg=49.51, stdev=24.65 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 128], 00:39:11.881 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:39:11.881 | 99.99th=[ 161] 00:39:11.881 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1266.53, stdev=406.78, samples=19 00:39:11.881 iops : min= 96, max= 384, avg=316.63, stdev=101.69, samples=19 00:39:11.881 lat (msec) : 50=92.88%, 100=1.12%, 250=6.00% 00:39:11.881 cpu : usr=96.97%, sys=1.93%, ctx=154, majf=0, minf=1635 00:39:11.881 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079361: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1280KiB/s (1311kB/s)(12.5MiB/10001msec) 00:39:11.881 slat (nsec): min=9268, max=97055, avg=39729.28, stdev=11140.16 00:39:11.881 clat (msec): min=42, max=160, avg=49.66, stdev=24.60 00:39:11.881 lat (msec): min=42, max=160, avg=49.69, stdev=24.60 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 128], 00:39:11.881 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:39:11.881 | 99.99th=[ 161] 00:39:11.881 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1266.53, stdev=406.78, samples=19 00:39:11.881 iops : min= 96, max= 384, avg=316.63, stdev=101.69, samples=19 00:39:11.881 lat (msec) : 50=93.00%, 100=1.00%, 250=6.00% 00:39:11.881 cpu : usr=95.75%, sys=2.61%, ctx=216, majf=0, minf=1635 00:39:11.881 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079362: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=319, BW=1277KiB/s (1308kB/s)(12.5MiB/10021msec) 00:39:11.881 slat (nsec): min=7143, max=73541, avg=34015.82, stdev=9373.12 00:39:11.881 clat (msec): min=27, max=217, avg=49.80, stdev=26.81 00:39:11.881 lat (msec): min=27, max=217, avg=49.83, stdev=26.81 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 31], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 218], 99.95th=[ 218], 00:39:11.881 | 99.99th=[ 218] 00:39:11.881 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=427.23, samples=19 00:39:11.881 iops : min= 64, max= 384, avg=314.95, stdev=106.81, samples=19 00:39:11.881 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.881 cpu : usr=96.99%, sys=2.12%, ctx=42, majf=0, minf=1634 00:39:11.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079363: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10003msec) 00:39:11.881 slat (usec): min=14, max=105, avg=45.09, stdev=15.40 00:39:11.881 clat (msec): min=30, max=263, avg=49.85, stdev=27.43 00:39:11.881 lat (msec): min=30, max=263, avg=49.90, stdev=27.43 00:39:11.881 clat percentiles (msec): 00:39:11.881 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.881 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.881 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.881 | 99.00th=[ 161], 99.50th=[ 228], 99.90th=[ 228], 99.95th=[ 264], 00:39:11.881 | 99.99th=[ 264] 00:39:11.881 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.58, stdev=426.85, samples=19 00:39:11.881 iops : min= 64, max= 384, avg=314.89, stdev=106.71, samples=19 00:39:11.881 lat (msec) : 50=93.97%, 250=5.97%, 500=0.06% 00:39:11.881 cpu : usr=97.69%, sys=1.55%, ctx=74, majf=0, minf=1636 00:39:11.881 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.881 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.881 filename1: (groupid=0, jobs=1): err= 0: pid=2079364: Sun Jul 14 15:11:50 2024 00:39:11.881 read: IOPS=318, BW=1272KiB/s (1303kB/s)(12.4MiB/10012msec) 00:39:11.881 slat (usec): min=8, max=164, avg=25.24, stdev=16.19 00:39:11.881 clat (msec): min=23, max=207, avg=50.09, stdev=25.57 00:39:11.881 lat (msec): 
min=23, max=207, avg=50.11, stdev=25.57 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:11.882 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 207], 00:39:11.882 | 99.99th=[ 207] 00:39:11.882 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1259.79, stdev=419.37, samples=19 00:39:11.882 iops : min= 96, max= 384, avg=314.95, stdev=104.84, samples=19 00:39:11.882 lat (msec) : 50=93.34%, 100=0.69%, 250=5.97% 00:39:11.882 cpu : usr=96.44%, sys=2.24%, ctx=103, majf=0, minf=1636 00:39:11.882 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename1: (groupid=0, jobs=1): err= 0: pid=2079365: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=329, BW=1317KiB/s (1349kB/s)(12.9MiB/10005msec) 00:39:11.882 slat (nsec): min=4594, max=78551, avg=26721.12, stdev=11411.52 00:39:11.882 clat (msec): min=14, max=161, avg=48.37, stdev=18.85 00:39:11.882 lat (msec): min=14, max=161, avg=48.39, stdev=18.84 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 96], 00:39:11.882 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:39:11.882 | 99.99th=[ 161] 00:39:11.882 bw ( KiB/s): min= 608, max= 1536, per=4.27%, avg=1306.11, stdev=330.12, samples=19 00:39:11.882 iops : min= 152, max= 384, avg=326.53, stdev=82.53, samples=19 00:39:11.882 lat (msec) : 20=0.49%, 50=90.89%, 100=4.19%, 250=4.43% 00:39:11.882 cpu : usr=97.97%, sys=1.56%, ctx=18, majf=0, minf=1634 00:39:11.882 IO depths : 1=5.7%, 2=11.5%, 4=23.6%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079366: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10001msec) 00:39:11.882 slat (nsec): min=6144, max=71443, avg=36138.09, stdev=8154.43 00:39:11.882 clat (msec): min=30, max=224, avg=49.93, stdev=27.25 00:39:11.882 lat (msec): min=30, max=224, avg=49.96, stdev=27.24 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 226], 99.90th=[ 226], 99.95th=[ 226], 00:39:11.882 | 99.99th=[ 226] 00:39:11.882 bw ( KiB/s): min= 256, max= 1539, per=4.12%, avg=1259.95, stdev=427.10, samples=19 00:39:11.882 iops : min= 64, max= 384, avg=314.95, stdev=106.75, samples=19 00:39:11.882 lat (msec) : 50=93.97%, 250=6.03% 00:39:11.882 cpu : 
usr=97.72%, sys=1.77%, ctx=25, majf=0, minf=1635 00:39:11.882 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079367: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=320, BW=1283KiB/s (1314kB/s)(12.6MiB/10028msec) 00:39:11.882 slat (usec): min=6, max=124, avg=64.91, stdev=12.28 00:39:11.882 clat (msec): min=29, max=201, avg=49.30, stdev=24.81 00:39:11.882 lat (msec): min=29, max=201, avg=49.37, stdev=24.81 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 127], 00:39:11.882 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 203], 00:39:11.882 | 99.99th=[ 203] 00:39:11.882 bw ( KiB/s): min= 384, max= 1536, per=4.18%, avg=1278.20, stdev=399.36, samples=20 00:39:11.882 iops : min= 96, max= 384, avg=319.55, stdev=99.84, samples=20 00:39:11.882 lat (msec) : 50=92.97%, 100=1.12%, 250=5.91% 00:39:11.882 cpu : usr=95.50%, sys=2.74%, ctx=281, majf=0, minf=1637 00:39:11.882 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079368: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=319, BW=1278KiB/s (1308kB/s)(12.5MiB/10017msec) 00:39:11.882 slat (nsec): min=13124, max=92005, avg=38568.43, stdev=9821.96 00:39:11.882 clat (msec): min=27, max=213, avg=49.72, stdev=26.68 00:39:11.882 lat (msec): min=27, max=213, avg=49.76, stdev=26.68 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 213], 99.95th=[ 213], 00:39:11.882 | 99.99th=[ 213] 00:39:11.882 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=427.23, samples=19 00:39:11.882 iops : min= 64, max= 384, avg=314.95, stdev=106.81, samples=19 00:39:11.882 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.882 cpu : usr=95.44%, sys=2.85%, ctx=151, majf=0, minf=1636 00:39:11.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079369: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10027msec) 00:39:11.882 slat (nsec): min=11206, max=86030, avg=24559.31, 
stdev=10212.42 00:39:11.882 clat (msec): min=33, max=226, avg=49.93, stdev=26.58 00:39:11.882 lat (msec): min=33, max=226, avg=49.95, stdev=26.58 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:39:11.882 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 226], 00:39:11.882 | 99.99th=[ 228] 00:39:11.882 bw ( KiB/s): min= 368, max= 1536, per=4.12%, avg=1259.63, stdev=425.07, samples=19 00:39:11.882 iops : min= 92, max= 384, avg=314.89, stdev=106.26, samples=19 00:39:11.882 lat (msec) : 50=94.00%, 250=6.00% 00:39:11.882 cpu : usr=97.31%, sys=1.91%, ctx=98, majf=0, minf=1634 00:39:11.882 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079370: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=318, BW=1272KiB/s (1303kB/s)(12.4MiB/10009msec) 00:39:11.882 slat (nsec): min=11153, max=82991, avg=32455.97, stdev=7720.55 00:39:11.882 clat (msec): min=32, max=317, avg=49.99, stdev=30.35 00:39:11.882 lat (msec): min=32, max=317, avg=50.03, stdev=30.35 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:39:11.882 | 99.99th=[ 317] 00:39:11.882 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=443.94, samples=19 00:39:11.882 iops : min= 64, max= 384, avg=314.95, stdev=110.99, samples=19 00:39:11.882 lat (msec) : 50=94.47%, 250=5.03%, 500=0.50% 00:39:11.882 cpu : usr=95.56%, sys=2.57%, ctx=142, majf=0, minf=1634 00:39:11.882 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079371: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10004msec) 00:39:11.882 slat (nsec): min=7105, max=59775, avg=22248.06, stdev=7418.44 00:39:11.882 clat (msec): min=22, max=248, avg=50.06, stdev=27.64 00:39:11.882 lat (msec): min=23, max=248, avg=50.08, stdev=27.64 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:39:11.882 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.882 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 146], 00:39:11.882 | 99.00th=[ 161], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:39:11.882 | 99.99th=[ 249] 00:39:11.882 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1259.79, stdev=427.23, samples=19 00:39:11.882 iops : min= 64, max= 384, avg=314.95, stdev=106.81, 
samples=19 00:39:11.882 lat (msec) : 50=93.84%, 100=0.63%, 250=5.53% 00:39:11.882 cpu : usr=97.69%, sys=1.78%, ctx=35, majf=0, minf=1636 00:39:11.882 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:11.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.882 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.882 filename2: (groupid=0, jobs=1): err= 0: pid=2079372: Sun Jul 14 15:11:50 2024 00:39:11.882 read: IOPS=318, BW=1273KiB/s (1304kB/s)(12.4MiB/10001msec) 00:39:11.882 slat (nsec): min=7671, max=79453, avg=37857.61, stdev=9400.40 00:39:11.882 clat (msec): min=34, max=202, avg=49.91, stdev=25.48 00:39:11.882 lat (msec): min=34, max=202, avg=49.95, stdev=25.48 00:39:11.882 clat percentiles (msec): 00:39:11.882 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:39:11.882 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.883 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 131], 00:39:11.883 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 203], 00:39:11.883 | 99.99th=[ 203] 00:39:11.883 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1259.79, stdev=420.55, samples=19 00:39:11.883 iops : min= 96, max= 384, avg=314.95, stdev=105.14, samples=19 00:39:11.883 lat (msec) : 50=93.40%, 100=0.13%, 250=6.47% 00:39:11.883 cpu : usr=96.17%, sys=2.24%, ctx=82, majf=0, minf=1637 00:39:11.883 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.883 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.883 filename2: (groupid=0, jobs=1): err= 0: pid=2079373: Sun Jul 14 15:11:50 2024 00:39:11.883 read: IOPS=318, BW=1276KiB/s (1306kB/s)(12.5MiB/10033msec) 00:39:11.883 slat (usec): min=12, max=107, avg=29.15, stdev=14.53 00:39:11.883 clat (msec): min=37, max=213, avg=49.90, stdev=25.42 00:39:11.883 lat (msec): min=37, max=214, avg=49.93, stdev=25.42 00:39:11.883 clat percentiles (msec): 00:39:11.883 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:39:11.883 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:39:11.883 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 144], 00:39:11.883 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 215], 00:39:11.883 | 99.99th=[ 215] 00:39:11.883 bw ( KiB/s): min= 384, max= 1536, per=4.16%, avg=1273.60, stdev=414.20, samples=20 00:39:11.883 iops : min= 96, max= 384, avg=318.40, stdev=103.55, samples=20 00:39:11.883 lat (msec) : 50=93.50%, 100=0.56%, 250=5.94% 00:39:11.883 cpu : usr=97.61%, sys=1.70%, ctx=47, majf=0, minf=1637 00:39:11.883 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:11.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.883 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.883 00:39:11.883 Run status group 0 (all jobs): 00:39:11.883 READ: bw=29.9MiB/s (31.3MB/s), 1272KiB/s-1317KiB/s 
(1303kB/s-1349kB/s), io=300MiB (314MB), run=10001-10036msec 00:39:12.141 ----------------------------------------------------- 00:39:12.141 Suppressions used: 00:39:12.141 count bytes template 00:39:12.141 45 402 /usr/src/fio/parse.c 00:39:12.141 1 8 libtcmalloc_minimal.so 00:39:12.141 1 904 libcrypto.so 00:39:12.141 ----------------------------------------------------- 00:39:12.141 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 
15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 bdev_null0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:12.141 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.142 [2024-07-14 15:11:51.299543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.142 bdev_null1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.142 { 00:39:12.142 "params": { 00:39:12.142 "name": "Nvme$subsystem", 00:39:12.142 "trtype": "$TEST_TRANSPORT", 00:39:12.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.142 "adrfam": "ipv4", 00:39:12.142 "trsvcid": "$NVMF_PORT", 00:39:12.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.142 "hdgst": ${hdgst:-false}, 00:39:12.142 "ddgst": ${ddgst:-false} 00:39:12.142 }, 00:39:12.142 "method": "bdev_nvme_attach_controller" 00:39:12.142 } 00:39:12.142 EOF 00:39:12.142 )") 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.142 { 00:39:12.142 "params": { 00:39:12.142 "name": "Nvme$subsystem", 00:39:12.142 "trtype": "$TEST_TRANSPORT", 00:39:12.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.142 "adrfam": "ipv4", 00:39:12.142 "trsvcid": "$NVMF_PORT", 00:39:12.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.142 "hdgst": ${hdgst:-false}, 00:39:12.142 "ddgst": ${ddgst:-false} 00:39:12.142 }, 00:39:12.142 "method": "bdev_nvme_attach_controller" 00:39:12.142 } 00:39:12.142 EOF 00:39:12.142 )") 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:12.142 "params": { 00:39:12.142 "name": "Nvme0", 00:39:12.142 "trtype": "tcp", 00:39:12.142 "traddr": "10.0.0.2", 00:39:12.142 "adrfam": "ipv4", 00:39:12.142 "trsvcid": "4420", 00:39:12.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.142 "hdgst": false, 00:39:12.142 "ddgst": false 00:39:12.142 }, 00:39:12.142 "method": "bdev_nvme_attach_controller" 00:39:12.142 },{ 00:39:12.142 "params": { 00:39:12.142 "name": "Nvme1", 00:39:12.142 "trtype": "tcp", 00:39:12.142 "traddr": "10.0.0.2", 00:39:12.142 "adrfam": "ipv4", 00:39:12.142 "trsvcid": "4420", 00:39:12.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:12.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:12.142 "hdgst": false, 00:39:12.142 "ddgst": false 00:39:12.142 }, 00:39:12.142 "method": "bdev_nvme_attach_controller" 00:39:12.142 }' 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:12.142 15:11:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.399 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:12.399 ... 00:39:12.399 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:12.399 ... 
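[Editor's note] For readers following the trace above: target/dif.sh drives fio through the SPDK bdev plugin, feeding the generated NVMe-oF JSON (printed verbatim just above) on one descriptor and the generated fio job file on another. The sketch below reproduces that invocation pattern with the parameters visible in this fio_dif_rand_params run (randread, bs=8k,16k,128k, iodepth=8, runtime=5, numjobs=2 per file). It is an illustrative sketch only: the bdev names Nvme0n1/Nvme1n1 and the use of plain temp files instead of /dev/fd/61 and /dev/fd/62 are assumptions, not copied from the generated job file.

```bash
# Sketch of the fio + spdk_bdev plugin invocation seen in the trace above.
# Workspace path matches this log; bdev names Nvme0n1/Nvme1n1 are assumed.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
; run fio workers as threads, required by the SPDK plugin
ioengine=spdk_bdev
thread=1
rw=randread
; read/write/trim block sizes, matching "bs=(R) 8192B ... (T) 128KiB" above
bs=8k,16k,128k
iodepth=8
runtime=5
time_based=1

[filename0]
; assumed bdev name exposed by bdev_nvme_attach_controller for Nvme0
filename=Nvme0n1
numjobs=2

[filename1]
; assumed bdev name for Nvme1
filename=Nvme1n1
numjobs=2
EOF

# The NVMe-oF JSON subsystem config is passed separately; ASAN is preloaded
# exactly as the LD_PRELOAD line in the trace shows.
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf=/tmp/nvmf_target.json /tmp/dif_rand_params.fio
```

With numjobs=2 in each of the two job sections, fio launches the "4 threads" reported in the startup banner that follows.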
00:39:12.399 fio-3.35 00:39:12.399 Starting 4 threads 00:39:12.399 EAL: No free 2048 kB hugepages reported on node 1 00:39:18.951 00:39:18.951 filename0: (groupid=0, jobs=1): err= 0: pid=2080874: Sun Jul 14 15:11:57 2024 00:39:18.951 read: IOPS=1452, BW=11.3MiB/s (11.9MB/s)(56.8MiB/5002msec) 00:39:18.951 slat (nsec): min=7072, max=51689, avg=16920.35, stdev=5556.42 00:39:18.951 clat (usec): min=1245, max=10332, avg=5447.52, stdev=526.54 00:39:18.951 lat (usec): min=1263, max=10343, avg=5464.44, stdev=526.54 00:39:18.951 clat percentiles (usec): 00:39:18.951 | 1.00th=[ 3621], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5276], 00:39:18.951 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5407], 60.00th=[ 5473], 00:39:18.951 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5997], 00:39:18.951 | 99.00th=[ 7111], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[ 9634], 00:39:18.951 | 99.99th=[10290] 00:39:18.951 bw ( KiB/s): min=11104, max=11952, per=25.22%, avg=11649.78, stdev=266.47, samples=9 00:39:18.951 iops : min= 1388, max= 1494, avg=1456.22, stdev=33.31, samples=9 00:39:18.951 lat (msec) : 2=0.15%, 4=1.09%, 10=98.73%, 20=0.03% 00:39:18.951 cpu : usr=92.14%, sys=7.22%, ctx=6, majf=0, minf=1637 00:39:18.951 IO depths : 1=0.6%, 2=14.5%, 4=58.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.951 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.951 issued rwts: total=7266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.951 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.951 filename0: (groupid=0, jobs=1): err= 0: pid=2080875: Sun Jul 14 15:11:57 2024 00:39:18.951 read: IOPS=1437, BW=11.2MiB/s (11.8MB/s)(56.2MiB/5002msec) 00:39:18.951 slat (nsec): min=7292, max=63557, avg=18631.97, stdev=5770.58 00:39:18.951 clat (usec): min=1081, max=11715, avg=5489.62, stdev=798.66 00:39:18.951 lat (usec): min=1099, max=11737, avg=5508.25, stdev=798.53 00:39:18.951 clat percentiles (usec): 00:39:18.951 | 1.00th=[ 2180], 5.00th=[ 4817], 10.00th=[ 5145], 20.00th=[ 5211], 00:39:18.951 | 30.00th=[ 5342], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:39:18.951 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6259], 00:39:18.951 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10028], 99.95th=[10159], 00:39:18.951 | 99.99th=[11731] 00:39:18.951 bw ( KiB/s): min=10736, max=11888, per=24.97%, avg=11534.22, stdev=373.39, samples=9 00:39:18.951 iops : min= 1342, max= 1486, avg=1441.78, stdev=46.67, samples=9 00:39:18.951 lat (msec) : 2=0.58%, 4=1.39%, 10=97.90%, 20=0.13% 00:39:18.951 cpu : usr=92.84%, sys=6.50%, ctx=11, majf=0, minf=1635 00:39:18.951 IO depths : 1=0.8%, 2=22.3%, 4=51.9%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.951 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.951 issued rwts: total=7192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.951 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.951 filename1: (groupid=0, jobs=1): err= 0: pid=2080876: Sun Jul 14 15:11:57 2024 00:39:18.951 read: IOPS=1446, BW=11.3MiB/s (11.8MB/s)(56.5MiB/5001msec) 00:39:18.951 slat (usec): min=7, max=196, avg=17.11, stdev= 5.65 00:39:18.951 clat (usec): min=1162, max=10182, avg=5468.88, stdev=559.51 00:39:18.951 lat (usec): min=1184, max=10215, avg=5485.99, stdev=559.44 00:39:18.951 clat percentiles (usec): 00:39:18.951 | 1.00th=[ 
3949], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5211], 00:39:18.951 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:39:18.951 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6128], 00:39:18.951 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[ 9634], 00:39:18.951 | 99.99th=[10159] 00:39:18.952 bw ( KiB/s): min=10997, max=11776, per=25.10%, avg=11595.22, stdev=304.34, samples=9 00:39:18.952 iops : min= 1374, max= 1472, avg=1449.33, stdev=38.20, samples=9 00:39:18.952 lat (msec) : 2=0.17%, 4=0.87%, 10=98.95%, 20=0.01% 00:39:18.952 cpu : usr=92.86%, sys=6.54%, ctx=11, majf=0, minf=1637 00:39:18.952 IO depths : 1=0.6%, 2=16.9%, 4=56.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.952 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.952 issued rwts: total=7233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.952 filename1: (groupid=0, jobs=1): err= 0: pid=2080877: Sun Jul 14 15:11:57 2024 00:39:18.952 read: IOPS=1437, BW=11.2MiB/s (11.8MB/s)(56.2MiB/5001msec) 00:39:18.952 slat (nsec): min=6866, max=62326, avg=17554.30, stdev=5789.16 00:39:18.952 clat (usec): min=1062, max=11299, avg=5495.28, stdev=846.86 00:39:18.952 lat (usec): min=1091, max=11321, avg=5512.83, stdev=846.80 00:39:18.952 clat percentiles (usec): 00:39:18.952 | 1.00th=[ 2057], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5276], 00:39:18.952 | 30.00th=[ 5342], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:39:18.952 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6652], 00:39:18.952 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[10290], 99.95th=[10290], 00:39:18.952 | 99.99th=[11338] 00:39:18.952 bw ( KiB/s): min=11040, max=11888, per=24.99%, avg=11543.78, stdev=339.69, samples=9 00:39:18.952 iops : min= 1380, max= 1486, avg=1442.89, stdev=42.57, samples=9 00:39:18.952 lat (msec) : 2=0.95%, 4=1.53%, 10=97.30%, 20=0.22% 00:39:18.952 cpu : usr=93.34%, sys=6.04%, ctx=6, majf=0, minf=1637 00:39:18.952 IO depths : 1=0.7%, 2=21.0%, 4=52.7%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.952 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.952 issued rwts: total=7189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.952 00:39:18.952 Run status group 0 (all jobs): 00:39:18.952 READ: bw=45.1MiB/s (47.3MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.9MB/s), io=226MiB (237MB), run=5001-5002msec 00:39:19.885 ----------------------------------------------------- 00:39:19.886 Suppressions used: 00:39:19.886 count bytes template 00:39:19.886 6 52 /usr/src/fio/parse.c 00:39:19.886 1 8 libtcmalloc_minimal.so 00:39:19.886 1 904 libcrypto.so 00:39:19.886 ----------------------------------------------------- 00:39:19.886 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:19.886 15:11:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 00:39:19.886 real 0m27.758s 00:39:19.886 user 4m32.917s 00:39:19.886 sys 0m8.849s 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 ************************************ 00:39:19.886 END TEST fio_dif_rand_params 00:39:19.886 ************************************ 00:39:19.886 15:11:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:19.886 15:11:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:19.886 15:11:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:19.886 15:11:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 ************************************ 00:39:19.886 START TEST fio_dif_digest 00:39:19.886 ************************************ 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # numjobs=3 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 bdev_null0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.886 [2024-07-14 15:11:58.966372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:19.886 { 00:39:19.886 "params": { 
00:39:19.886 "name": "Nvme$subsystem", 00:39:19.886 "trtype": "$TEST_TRANSPORT", 00:39:19.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:19.886 "adrfam": "ipv4", 00:39:19.886 "trsvcid": "$NVMF_PORT", 00:39:19.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:19.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:19.886 "hdgst": ${hdgst:-false}, 00:39:19.886 "ddgst": ${ddgst:-false} 00:39:19.886 }, 00:39:19.886 "method": "bdev_nvme_attach_controller" 00:39:19.886 } 00:39:19.886 EOF 00:39:19.886 )") 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:19.886 "params": { 00:39:19.886 "name": "Nvme0", 00:39:19.886 "trtype": "tcp", 00:39:19.886 "traddr": "10.0.0.2", 00:39:19.886 "adrfam": "ipv4", 00:39:19.886 "trsvcid": "4420", 00:39:19.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:19.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:19.886 "hdgst": true, 00:39:19.886 "ddgst": true 00:39:19.886 }, 00:39:19.886 "method": "bdev_nvme_attach_controller" 00:39:19.886 }' 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:19.886 15:11:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.145 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:20.145 ... 00:39:20.145 fio-3.35 00:39:20.145 Starting 3 threads 00:39:20.145 EAL: No free 2048 kB hugepages reported on node 1 00:39:32.340 00:39:32.340 filename0: (groupid=0, jobs=1): err= 0: pid=2081902: Sun Jul 14 15:12:10 2024 00:39:32.340 read: IOPS=167, BW=21.0MiB/s (22.0MB/s)(211MiB/10045msec) 00:39:32.340 slat (nsec): min=7827, max=65644, avg=25182.90, stdev=5952.94 00:39:32.340 clat (usec): min=13212, max=58717, avg=17839.52, stdev=2425.17 00:39:32.340 lat (usec): min=13238, max=58738, avg=17864.70, stdev=2425.43 00:39:32.340 clat percentiles (usec): 00:39:32.340 | 1.00th=[14746], 5.00th=[15664], 10.00th=[16188], 20.00th=[16909], 00:39:32.340 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:39:32.340 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19268], 95.00th=[19792], 00:39:32.340 | 99.00th=[20841], 99.50th=[21365], 99.90th=[57410], 99.95th=[58459], 00:39:32.340 | 99.99th=[58459] 00:39:32.340 bw ( KiB/s): min=19968, max=22784, per=33.95%, avg=21527.40, stdev=641.14, samples=20 00:39:32.340 iops : min= 156, max= 178, avg=168.15, stdev= 4.99, samples=20 00:39:32.340 lat (msec) : 20=96.32%, 50=3.38%, 100=0.30% 00:39:32.340 cpu : usr=94.10%, sys=5.31%, ctx=21, majf=0, minf=1636 00:39:32.340 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.340 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.340 filename0: (groupid=0, jobs=1): err= 0: pid=2081903: Sun Jul 14 15:12:10 2024 00:39:32.340 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(204MiB/10048msec) 00:39:32.340 slat (nsec): min=7618, max=55421, avg=21673.84, stdev=4716.82 00:39:32.340 clat (usec): min=12764, max=51800, avg=18442.05, stdev=1515.17 00:39:32.340 lat (usec): min=12782, max=51818, avg=18463.72, stdev=1514.93 00:39:32.340 clat percentiles (usec): 00:39:32.340 | 1.00th=[15926], 5.00th=[16909], 10.00th=[17433], 
20.00th=[17695], 00:39:32.340 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:39:32.340 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:39:32.340 | 99.00th=[20841], 99.50th=[21627], 99.90th=[47973], 99.95th=[51643], 00:39:32.340 | 99.99th=[51643] 00:39:32.340 bw ( KiB/s): min=19968, max=21248, per=32.84%, avg=20825.60, stdev=313.81, samples=20 00:39:32.340 iops : min= 156, max= 166, avg=162.70, stdev= 2.45, samples=20 00:39:32.340 lat (msec) : 20=95.15%, 50=4.79%, 100=0.06% 00:39:32.340 cpu : usr=93.09%, sys=6.35%, ctx=23, majf=0, minf=1635 00:39:32.340 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 issued rwts: total=1630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.340 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.340 filename0: (groupid=0, jobs=1): err= 0: pid=2081904: Sun Jul 14 15:12:10 2024 00:39:32.340 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10049msec) 00:39:32.340 slat (nsec): min=7872, max=56290, avg=22031.86, stdev=5034.44 00:39:32.340 clat (usec): min=11116, max=54120, avg=18064.85, stdev=1577.42 00:39:32.340 lat (usec): min=11134, max=54139, avg=18086.88, stdev=1577.55 00:39:32.340 clat percentiles (usec): 00:39:32.340 | 1.00th=[15270], 5.00th=[16450], 10.00th=[16909], 20.00th=[17433], 00:39:32.340 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18220], 00:39:32.340 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[19530], 00:39:32.340 | 99.00th=[20841], 99.50th=[21365], 99.90th=[49021], 99.95th=[54264], 00:39:32.340 | 99.99th=[54264] 00:39:32.340 bw ( KiB/s): min=20224, max=21760, per=33.53%, avg=21262.85, stdev=382.05, samples=20 00:39:32.340 iops : min= 158, max= 170, avg=166.10, stdev= 3.01, samples=20 00:39:32.340 lat (msec) : 20=97.60%, 50=2.34%, 100=0.06% 00:39:32.340 cpu : usr=92.95%, sys=6.48%, ctx=20, majf=0, minf=1636 00:39:32.340 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.340 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.340 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.340 00:39:32.340 Run status group 0 (all jobs): 00:39:32.340 READ: bw=61.9MiB/s (64.9MB/s), 20.3MiB/s-21.0MiB/s (21.3MB/s-22.0MB/s), io=622MiB (652MB), run=10045-10049msec 00:39:32.340 ----------------------------------------------------- 00:39:32.340 Suppressions used: 00:39:32.340 count bytes template 00:39:32.340 5 44 /usr/src/fio/parse.c 00:39:32.340 1 8 libtcmalloc_minimal.so 00:39:32.340 1 904 libcrypto.so 00:39:32.340 ----------------------------------------------------- 00:39:32.340 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.340 00:39:32.340 real 0m12.557s 00:39:32.340 user 0m30.555s 00:39:32.340 sys 0m2.252s 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:32.340 15:12:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.340 ************************************ 00:39:32.340 END TEST fio_dif_digest 00:39:32.340 ************************************ 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:32.340 15:12:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:32.340 15:12:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:32.340 rmmod nvme_tcp 00:39:32.340 rmmod nvme_fabrics 00:39:32.340 rmmod nvme_keyring 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2074973 ']' 00:39:32.340 15:12:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2074973 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2074973 ']' 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2074973 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2074973 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2074973' 00:39:32.340 killing process with pid 2074973 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2074973 00:39:32.340 15:12:11 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2074973 00:39:33.715 15:12:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:33.715 15:12:12 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:34.646 Waiting for block devices as requested 00:39:34.646 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:34.905 0000:00:04.7 (8086 
0e27): vfio-pci -> ioatdma 00:39:34.905 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:35.164 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:35.164 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.164 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.165 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.427 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.427 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:35.427 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:35.427 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:35.686 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:35.686 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.686 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.945 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.945 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.945 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:36.203 15:12:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:36.203 15:12:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:36.203 15:12:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:36.203 15:12:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:36.203 15:12:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.203 15:12:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:36.203 15:12:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.104 15:12:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:38.104 00:39:38.104 real 1m15.611s 00:39:38.104 user 6m42.246s 00:39:38.104 sys 0m20.341s 00:39:38.104 15:12:17 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:38.104 15:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.104 ************************************ 00:39:38.104 END TEST nvmf_dif 00:39:38.104 ************************************ 00:39:38.104 15:12:17 -- common/autotest_common.sh@1142 -- # return 0 00:39:38.104 15:12:17 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:38.104 15:12:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:38.104 15:12:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:38.104 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:39:38.104 ************************************ 00:39:38.104 START TEST nvmf_abort_qd_sizes 00:39:38.104 ************************************ 00:39:38.104 15:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:38.363 * Looking for test storage... 
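For reference, the nvmf_dif teardown captured above (nvmftestfini followed by setup.sh reset) boils down to the following shell sequence. This is only a sketch of what the nvmf/common.sh helpers (killprocess, nvmfcleanup, remove_spdk_ns, nvmf_tcp_fini) do, with the PID, namespace and interface names taken from this particular run:

    kill 2074973                      # killprocess: stop the nvmf_tgt app started for the test
    modprobe -v -r nvme-tcp           # nvmfcleanup: unload initiator modules pulled in by 'nvme connect'
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # remove_spdk_ns: drop the target-side network namespace
    ip -4 addr flush cvl_0_1          # nvmf_tcp_fini: clear the initiator-side test address
    ./scripts/setup.sh reset          # rebind NVMe/IOAT devices back to their kernel drivers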
00:39:38.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:38.363 15:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:40.266 15:12:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.266 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:40.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:40.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:40.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:40.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:40.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:39:40.267 00:39:40.267 --- 10.0.0.2 ping statistics --- 00:39:40.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.267 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:39:40.267 00:39:40.267 --- 10.0.0.1 ping statistics --- 00:39:40.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.267 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:40.267 15:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:41.642 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.642 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.642 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:42.579 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2087528 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2087528 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2087528 ']' 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:42.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:42.579 15:12:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.579 [2024-07-14 15:12:21.884229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:42.579 [2024-07-14 15:12:21.884380] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.839 EAL: No free 2048 kB hugepages reported on node 1 00:39:42.839 [2024-07-14 15:12:22.021358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:43.097 [2024-07-14 15:12:22.278938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:43.097 [2024-07-14 15:12:22.279010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:43.097 [2024-07-14 15:12:22.279035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:43.097 [2024-07-14 15:12:22.279053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:43.097 [2024-07-14 15:12:22.279071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:43.097 [2024-07-14 15:12:22.279652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.097 [2024-07-14 15:12:22.279775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.097 [2024-07-14 15:12:22.279814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.097 [2024-07-14 15:12:22.279824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:43.662 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:43.663 15:12:22 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:43.663 15:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.663 ************************************ 00:39:43.663 START TEST spdk_target_abort 00:39:43.663 ************************************ 00:39:43.663 15:12:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:43.663 15:12:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:43.663 15:12:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:43.663 15:12:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.663 15:12:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.943 spdk_targetn1 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.944 [2024-07-14 15:12:25.732498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.944 [2024-07-14 15:12:25.778639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.944 15:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.944 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:50.232 Initializing NVMe Controllers 00:39:50.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:50.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:50.232 Initialization complete. Launching workers. 00:39:50.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9244, failed: 0 00:39:50.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 8022 00:39:50.232 success 701, unsuccess 521, failed 0 00:39:50.232 15:12:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:50.232 15:12:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:50.232 EAL: No free 2048 kB hugepages reported on node 1 00:39:53.514 Initializing NVMe Controllers 00:39:53.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:53.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:53.514 Initialization complete. Launching workers. 00:39:53.514 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8479, failed: 0 00:39:53.514 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7213 00:39:53.514 success 291, unsuccess 975, failed 0 00:39:53.514 15:12:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:53.514 15:12:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:53.514 EAL: No free 2048 kB hugepages reported on node 1 00:39:56.799 Initializing NVMe Controllers 00:39:56.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:56.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:56.799 Initialization complete. Launching workers. 
00:39:56.799 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27326, failed: 0 00:39:56.799 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2657, failed to submit 24669 00:39:56.799 success 222, unsuccess 2435, failed 0 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.799 15:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2087528 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2087528 ']' 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2087528 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2087528 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2087528' 00:39:58.171 killing process with pid 2087528 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2087528 00:39:58.171 15:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2087528 00:39:59.571 00:39:59.571 real 0m15.604s 00:39:59.571 user 0m59.662s 00:39:59.571 sys 0m2.840s 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:59.571 ************************************ 00:39:59.571 END TEST spdk_target_abort 00:39:59.571 ************************************ 00:39:59.571 15:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:59.571 15:12:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:59.571 15:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:59.571 15:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:59.571 15:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:59.571 
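The spdk_target_abort test that just completed drives the target entirely through rpc_cmd; outside the test framework the same setup can be reproduced with scripts/rpc.py using the arguments recorded above. A rough sketch (the PCIe address 0000:88:00.0, listener 10.0.0.2:4420 and the queue depths 4/24/64 are the values from this run, and rpc_cmd is assumed to be a thin wrapper around rpc.py):

    # export the local NVMe drive as an SPDK bdev named spdk_target
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    # create the TCP transport and a subsystem backed by that bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # issue mixed I/O plus abort commands against the subsystem at a fixed queue depth
    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'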
************************************ 00:39:59.571 START TEST kernel_target_abort 00:39:59.571 ************************************ 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:59.571 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:59.572 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:59.572 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:59.572 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:59.572 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:59.572 15:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:00.505 Waiting for block devices as requested 00:40:00.505 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:40:00.505 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:00.766 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:00.766 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:00.766 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:00.766 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:01.023 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:01.023 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:01.023 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:01.023 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:01.279 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:01.279 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:01.279 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:01.279 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:01.536 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:01.536 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:01.536 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:02.100 No valid GPT data, bailing 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:02.100 15:12:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:02.100 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:40:02.101 00:40:02.101 Discovery Log Number of Records 2, Generation counter 2 00:40:02.101 =====Discovery Log Entry 0====== 00:40:02.101 trtype: tcp 00:40:02.101 adrfam: ipv4 00:40:02.101 subtype: current discovery subsystem 00:40:02.101 treq: not specified, sq flow control disable supported 00:40:02.101 portid: 1 00:40:02.101 trsvcid: 4420 00:40:02.101 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:02.101 traddr: 10.0.0.1 00:40:02.101 eflags: none 00:40:02.101 sectype: none 00:40:02.101 =====Discovery Log Entry 1====== 00:40:02.101 trtype: tcp 00:40:02.101 adrfam: ipv4 00:40:02.101 subtype: nvme subsystem 00:40:02.101 treq: not specified, sq flow control disable supported 00:40:02.101 portid: 1 00:40:02.101 trsvcid: 4420 00:40:02.101 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:02.101 traddr: 10.0.0.1 00:40:02.101 eflags: none 00:40:02.101 sectype: none 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.101 15:12:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:02.101 15:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:02.359 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.635 Initializing NVMe Controllers 00:40:05.635 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:05.635 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:05.635 Initialization complete. Launching workers. 00:40:05.635 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36198, failed: 0 00:40:05.635 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36198, failed to submit 0 00:40:05.635 success 0, unsuccess 36198, failed 0 00:40:05.635 15:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:05.635 15:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:05.635 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.915 Initializing NVMe Controllers 00:40:08.915 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:08.915 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:08.915 Initialization complete. Launching workers. 
00:40:08.915 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65052, failed: 0 00:40:08.915 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16418, failed to submit 48634 00:40:08.915 success 0, unsuccess 16418, failed 0 00:40:08.915 15:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:08.915 15:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:08.915 EAL: No free 2048 kB hugepages reported on node 1 00:40:12.194 Initializing NVMe Controllers 00:40:12.194 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:12.194 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:12.194 Initialization complete. Launching workers. 00:40:12.194 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62880, failed: 0 00:40:12.194 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15714, failed to submit 47166 00:40:12.194 success 0, unsuccess 15714, failed 0 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:12.194 15:12:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:13.128 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:13.128 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:40:13.128 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:13.128 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:14.058 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:40:14.058 00:40:14.058 real 0m14.821s 00:40:14.058 user 0m7.104s 00:40:14.058 sys 0m3.415s 00:40:14.058 15:12:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:14.058 15:12:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.058 ************************************ 00:40:14.058 END TEST kernel_target_abort 00:40:14.058 ************************************ 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:14.058 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:14.058 rmmod nvme_tcp 00:40:14.315 rmmod nvme_fabrics 00:40:14.315 rmmod nvme_keyring 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2087528 ']' 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2087528 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2087528 ']' 00:40:14.315 15:12:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2087528 00:40:14.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2087528) - No such process 00:40:14.316 15:12:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2087528 is not found' 00:40:14.316 Process with pid 2087528 is not found 00:40:14.316 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:14.316 15:12:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:15.249 Waiting for block devices as requested 00:40:15.249 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:40:15.506 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:15.506 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:15.764 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:15.764 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:15.764 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:15.764 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:15.764 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:16.022 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:16.022 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:16.022 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:16.022 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:16.281 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:16.281 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:40:16.281 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:16.281 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:16.539 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:16.539 15:12:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.438 15:12:57 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:18.696 00:40:18.696 real 0m40.381s 00:40:18.696 user 1m9.135s 00:40:18.696 sys 0m9.542s 00:40:18.696 15:12:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.696 15:12:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:18.696 ************************************ 00:40:18.696 END TEST nvmf_abort_qd_sizes 00:40:18.696 ************************************ 00:40:18.696 15:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:40:18.696 15:12:57 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.696 15:12:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:18.696 15:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.696 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:40:18.696 ************************************ 00:40:18.696 START TEST keyring_file 00:40:18.696 ************************************ 00:40:18.696 15:12:57 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.696 * Looking for test storage... 
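From this point the log switches to test/keyring/file.sh. Before any RPCs are issued, the test prepares two PSK key files (key0 and key1) in the NVMe/TLS interchange format (prefix NVMeTLSkey-1, digest 0) and restricts them to mode 0600; only then are they registered with a bdevperf instance over /var/tmp/bperf.sock. A minimal sketch of that preparation, assuming prep_key in keyring/common.sh behaves as the xtrace below shows (the redirect into the temp file is not itself visible in the trace):

    # Hypothetical condensation of prep_key from keyring/common.sh, based on the trace below.
    key=00112233445566778899aabbccddeeff       # key0 material used by file.sh
    path=$(mktemp)                              # e.g. /tmp/tmp.uRAJNVN5Do in this run
    format_interchange_psk "$key" 0 > "$path"   # helper from nvmf/common.sh, digest 0
    chmod 0600 "$path"                          # looser modes are rejected later in the test

The 0660-permission failure and the missing-file failure exercised further down rely on exactly this 0600 requirement.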
00:40:18.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:18.696 15:12:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:18.696 15:12:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.696 15:12:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.697 15:12:57 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.697 15:12:57 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.697 15:12:57 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.697 15:12:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.697 15:12:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.697 15:12:57 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.697 15:12:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:18.697 15:12:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@47 -- # : 0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uRAJNVN5Do 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:18.697 15:12:57 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uRAJNVN5Do 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uRAJNVN5Do 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uRAJNVN5Do 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vCCypXOz4B 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:18.697 15:12:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vCCypXOz4B 00:40:18.697 15:12:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vCCypXOz4B 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vCCypXOz4B 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=2093755 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:18.697 15:12:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2093755 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2093755 ']' 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:18.697 15:12:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:18.977 [2024-07-14 15:12:58.021687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:18.977 [2024-07-14 15:12:58.021848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093755 ] 00:40:18.977 EAL: No free 2048 kB hugepages reported on node 1 00:40:18.977 [2024-07-14 15:12:58.150637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.235 [2024-07-14 15:12:58.401179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.219 15:12:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:20.220 15:12:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.220 [2024-07-14 15:12:59.301031] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.220 null0 00:40:20.220 [2024-07-14 15:12:59.333048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:20.220 [2024-07-14 15:12:59.333624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:20.220 [2024-07-14 15:12:59.341095] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:20.220 15:12:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.220 [2024-07-14 15:12:59.353094] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:20.220 request: 00:40:20.220 { 00:40:20.220 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.220 "secure_channel": false, 00:40:20.220 "listen_address": { 00:40:20.220 "trtype": "tcp", 00:40:20.220 "traddr": "127.0.0.1", 00:40:20.220 "trsvcid": "4420" 00:40:20.220 }, 00:40:20.220 "method": "nvmf_subsystem_add_listener", 00:40:20.220 "req_id": 1 00:40:20.220 } 00:40:20.220 Got JSON-RPC error response 00:40:20.220 response: 00:40:20.220 { 00:40:20.220 "code": -32602, 00:40:20.220 "message": "Invalid parameters" 00:40:20.220 } 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 
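The request/response pair just above is a deliberate failure: the listener on 127.0.0.1:4420 was already created for nqn.2016-06.io.spdk:cnode0, so adding it a second time makes the target log "Listener already exists" and return JSON-RPC error -32602, which the NOT wrapper turns into a passing check. An equivalent standalone call, assuming rpc_cmd wraps scripts/rpc.py against the target's default /var/tmp/spdk.sock as it does elsewhere in this job:

    # Sketch of the negative listener check; the RPC failure here is the expected outcome.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected: duplicate listener was accepted" >&2
        exit 1
    fi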
00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:20.220 15:12:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=2093899 00:40:20.220 15:12:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2093899 /var/tmp/bperf.sock 00:40:20.220 15:12:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2093899 ']' 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:20.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:20.220 15:12:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.220 [2024-07-14 15:12:59.437913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:20.220 [2024-07-14 15:12:59.438060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093899 ] 00:40:20.220 EAL: No free 2048 kB hugepages reported on node 1 00:40:20.478 [2024-07-14 15:12:59.563012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.735 [2024-07-14 15:12:59.808688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:21.298 15:13:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:21.298 15:13:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:21.298 15:13:00 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:21.298 15:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:21.298 15:13:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vCCypXOz4B 00:40:21.298 15:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vCCypXOz4B 00:40:21.555 15:13:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:21.555 15:13:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:21.555 15:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.555 15:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.555 15:13:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.811 15:13:01 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.uRAJNVN5Do == \/\t\m\p\/\t\m\p\.\u\R\A\J\N\V\N\5\D\o ]] 00:40:21.811 15:13:01 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:40:21.811 15:13:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:21.812 15:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.812 15:13:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.812 15:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.069 15:13:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vCCypXOz4B == \/\t\m\p\/\t\m\p\.\v\C\C\y\p\X\O\z\4\B ]] 00:40:22.069 15:13:01 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:22.069 15:13:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.069 15:13:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.069 15:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.069 15:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.069 15:13:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.327 15:13:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:22.327 15:13:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:22.327 15:13:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:22.327 15:13:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.327 15:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.327 15:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.327 15:13:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.584 15:13:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:22.584 15:13:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.584 15:13:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.842 [2024-07-14 15:13:02.070625] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:23.100 nvme0n1 00:40:23.101 15:13:02 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:23.101 15:13:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:23.101 15:13:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:23.101 15:13:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:23.101 15:13:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.101 15:13:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:23.358 15:13:02 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:23.358 15:13:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:23.358 15:13:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:23.358 15:13:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:23.358 15:13:02 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:23.358 15:13:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.358 15:13:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:23.617 15:13:02 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:40:23.617 15:13:02 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:23.617 Running I/O for 1 seconds... 00:40:24.548 00:40:24.548 Latency(us) 00:40:24.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.548 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:24.548 nvme0n1 : 1.01 6211.62 24.26 0.00 0.00 20492.69 6796.33 30680.56 00:40:24.548 =================================================================================================================== 00:40:24.548 Total : 6211.62 24.26 0.00 0.00 20492.69 6796.33 30680.56 00:40:24.548 0 00:40:24.548 15:13:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:24.548 15:13:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:24.805 15:13:04 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:40:24.805 15:13:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:24.805 15:13:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.805 15:13:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.805 15:13:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.805 15:13:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.063 15:13:04 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:40:25.063 15:13:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:40:25.063 15:13:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.063 15:13:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.063 15:13:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.063 15:13:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.063 15:13:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.321 15:13:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:25.321 15:13:04 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:25.321 15:13:04 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:25.321 15:13:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.321 15:13:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.580 [2024-07-14 15:13:04.820701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:25.580 [2024-07-14 15:13:04.821290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:40:25.580 [2024-07-14 15:13:04.822255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:25.580 [2024-07-14 15:13:04.823249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:25.580 [2024-07-14 15:13:04.823284] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:25.580 [2024-07-14 15:13:04.823307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:25.580 request: 00:40:25.580 { 00:40:25.580 "name": "nvme0", 00:40:25.580 "trtype": "tcp", 00:40:25.580 "traddr": "127.0.0.1", 00:40:25.580 "adrfam": "ipv4", 00:40:25.580 "trsvcid": "4420", 00:40:25.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:25.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:25.580 "prchk_reftag": false, 00:40:25.580 "prchk_guard": false, 00:40:25.580 "hdgst": false, 00:40:25.580 "ddgst": false, 00:40:25.580 "psk": "key1", 00:40:25.580 "method": "bdev_nvme_attach_controller", 00:40:25.580 "req_id": 1 00:40:25.580 } 00:40:25.580 Got JSON-RPC error response 00:40:25.580 response: 00:40:25.580 { 00:40:25.580 "code": -5, 00:40:25.580 "message": "Input/output error" 00:40:25.580 } 00:40:25.580 15:13:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:25.580 15:13:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:25.580 15:13:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:25.580 15:13:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:25.580 15:13:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:40:25.580 15:13:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:25.580 15:13:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.580 15:13:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.580 15:13:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.580 15:13:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.838 15:13:05 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:40:25.838 15:13:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:40:25.838 15:13:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.838 
15:13:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.838 15:13:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.838 15:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.838 15:13:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:26.095 15:13:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:26.095 15:13:05 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:40:26.095 15:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:26.353 15:13:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:40:26.353 15:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:26.610 15:13:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:40:26.610 15:13:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:40:26.610 15:13:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.868 15:13:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:40:26.868 15:13:06 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.uRAJNVN5Do 00:40:26.868 15:13:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.868 15:13:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:26.868 15:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:27.126 [2024-07-14 15:13:06.302229] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uRAJNVN5Do': 0100660 00:40:27.126 [2024-07-14 15:13:06.302281] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:27.126 request: 00:40:27.126 { 00:40:27.126 "name": "key0", 00:40:27.126 "path": "/tmp/tmp.uRAJNVN5Do", 00:40:27.126 "method": "keyring_file_add_key", 00:40:27.126 "req_id": 1 00:40:27.126 } 00:40:27.126 Got JSON-RPC error response 00:40:27.126 response: 00:40:27.126 { 00:40:27.126 "code": -1, 00:40:27.126 "message": "Operation not permitted" 00:40:27.126 } 00:40:27.126 15:13:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:27.126 15:13:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.126 15:13:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.126 15:13:06 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.126 15:13:06 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.uRAJNVN5Do 00:40:27.126 15:13:06 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:27.126 15:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uRAJNVN5Do 00:40:27.383 15:13:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.uRAJNVN5Do 00:40:27.383 15:13:06 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:40:27.383 15:13:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:27.383 15:13:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:27.383 15:13:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:27.384 15:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:27.384 15:13:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:27.641 15:13:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:40:27.641 15:13:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.641 15:13:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.641 15:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.898 [2024-07-14 15:13:07.060528] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uRAJNVN5Do': No such file or directory 00:40:27.898 [2024-07-14 15:13:07.060585] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:27.898 [2024-07-14 15:13:07.060628] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:27.898 [2024-07-14 15:13:07.060647] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:27.898 [2024-07-14 15:13:07.060668] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:27.898 request: 00:40:27.898 { 00:40:27.898 "name": "nvme0", 00:40:27.898 "trtype": "tcp", 00:40:27.898 "traddr": "127.0.0.1", 00:40:27.898 "adrfam": "ipv4", 00:40:27.898 
"trsvcid": "4420", 00:40:27.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:27.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:27.898 "prchk_reftag": false, 00:40:27.898 "prchk_guard": false, 00:40:27.898 "hdgst": false, 00:40:27.898 "ddgst": false, 00:40:27.898 "psk": "key0", 00:40:27.898 "method": "bdev_nvme_attach_controller", 00:40:27.898 "req_id": 1 00:40:27.898 } 00:40:27.899 Got JSON-RPC error response 00:40:27.899 response: 00:40:27.899 { 00:40:27.899 "code": -19, 00:40:27.899 "message": "No such device" 00:40:27.899 } 00:40:27.899 15:13:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:27.899 15:13:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.899 15:13:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.899 15:13:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.899 15:13:07 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:40:27.899 15:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:28.156 15:13:07 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0utngAheRM 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:28.156 15:13:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0utngAheRM 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0utngAheRM 00:40:28.156 15:13:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.0utngAheRM 00:40:28.156 15:13:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0utngAheRM 00:40:28.156 15:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0utngAheRM 00:40:28.415 15:13:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.415 15:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.673 nvme0n1 00:40:28.673 
15:13:07 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:40:28.673 15:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.673 15:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:28.673 15:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.673 15:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.673 15:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:28.930 15:13:08 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:40:28.930 15:13:08 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:40:28.930 15:13:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:29.188 15:13:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:40:29.188 15:13:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:40:29.188 15:13:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.188 15:13:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.188 15:13:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.446 15:13:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:40:29.446 15:13:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:40:29.446 15:13:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:29.446 15:13:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.446 15:13:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.446 15:13:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.446 15:13:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.704 15:13:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:40:29.704 15:13:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:29.704 15:13:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:29.962 15:13:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:40:29.962 15:13:09 keyring_file -- keyring/file.sh@104 -- # jq length 00:40:29.962 15:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.219 15:13:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:40:30.219 15:13:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0utngAheRM 00:40:30.219 15:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0utngAheRM 00:40:30.477 15:13:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vCCypXOz4B 00:40:30.477 15:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vCCypXOz4B 00:40:30.736 15:13:09 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.736 15:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.994 nvme0n1 00:40:30.994 15:13:10 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:40:30.994 15:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:31.252 15:13:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:40:31.252 "subsystems": [ 00:40:31.252 { 00:40:31.252 "subsystem": "keyring", 00:40:31.252 "config": [ 00:40:31.252 { 00:40:31.252 "method": "keyring_file_add_key", 00:40:31.252 "params": { 00:40:31.252 "name": "key0", 00:40:31.252 "path": "/tmp/tmp.0utngAheRM" 00:40:31.252 } 00:40:31.252 }, 00:40:31.252 { 00:40:31.252 "method": "keyring_file_add_key", 00:40:31.252 "params": { 00:40:31.252 "name": "key1", 00:40:31.252 "path": "/tmp/tmp.vCCypXOz4B" 00:40:31.252 } 00:40:31.252 } 00:40:31.252 ] 00:40:31.252 }, 00:40:31.252 { 00:40:31.252 "subsystem": "iobuf", 00:40:31.252 "config": [ 00:40:31.252 { 00:40:31.252 "method": "iobuf_set_options", 00:40:31.252 "params": { 00:40:31.252 "small_pool_count": 8192, 00:40:31.252 "large_pool_count": 1024, 00:40:31.252 "small_bufsize": 8192, 00:40:31.252 "large_bufsize": 135168 00:40:31.252 } 00:40:31.252 } 00:40:31.252 ] 00:40:31.252 }, 00:40:31.252 { 00:40:31.252 "subsystem": "sock", 00:40:31.252 "config": [ 00:40:31.252 { 00:40:31.252 "method": "sock_set_default_impl", 00:40:31.252 "params": { 00:40:31.252 "impl_name": "posix" 00:40:31.252 } 00:40:31.252 }, 00:40:31.252 { 00:40:31.252 "method": "sock_impl_set_options", 00:40:31.253 "params": { 00:40:31.253 "impl_name": "ssl", 00:40:31.253 "recv_buf_size": 4096, 00:40:31.253 "send_buf_size": 4096, 00:40:31.253 "enable_recv_pipe": true, 00:40:31.253 "enable_quickack": false, 00:40:31.253 "enable_placement_id": 0, 00:40:31.253 "enable_zerocopy_send_server": true, 00:40:31.253 "enable_zerocopy_send_client": false, 00:40:31.253 "zerocopy_threshold": 0, 00:40:31.253 "tls_version": 0, 00:40:31.253 "enable_ktls": false 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "sock_impl_set_options", 00:40:31.253 "params": { 00:40:31.253 "impl_name": "posix", 00:40:31.253 "recv_buf_size": 2097152, 00:40:31.253 "send_buf_size": 2097152, 00:40:31.253 "enable_recv_pipe": true, 00:40:31.253 "enable_quickack": false, 00:40:31.253 "enable_placement_id": 0, 00:40:31.253 "enable_zerocopy_send_server": true, 00:40:31.253 "enable_zerocopy_send_client": false, 00:40:31.253 "zerocopy_threshold": 0, 00:40:31.253 "tls_version": 0, 00:40:31.253 "enable_ktls": false 00:40:31.253 } 00:40:31.253 } 00:40:31.253 ] 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "subsystem": "vmd", 00:40:31.253 "config": [] 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "subsystem": "accel", 00:40:31.253 "config": [ 00:40:31.253 { 00:40:31.253 "method": "accel_set_options", 00:40:31.253 "params": { 00:40:31.253 "small_cache_size": 128, 00:40:31.253 "large_cache_size": 16, 00:40:31.253 "task_count": 2048, 00:40:31.253 "sequence_count": 2048, 00:40:31.253 "buf_count": 2048 00:40:31.253 } 00:40:31.253 } 00:40:31.253 ] 00:40:31.253 
}, 00:40:31.253 { 00:40:31.253 "subsystem": "bdev", 00:40:31.253 "config": [ 00:40:31.253 { 00:40:31.253 "method": "bdev_set_options", 00:40:31.253 "params": { 00:40:31.253 "bdev_io_pool_size": 65535, 00:40:31.253 "bdev_io_cache_size": 256, 00:40:31.253 "bdev_auto_examine": true, 00:40:31.253 "iobuf_small_cache_size": 128, 00:40:31.253 "iobuf_large_cache_size": 16 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_raid_set_options", 00:40:31.253 "params": { 00:40:31.253 "process_window_size_kb": 1024 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_iscsi_set_options", 00:40:31.253 "params": { 00:40:31.253 "timeout_sec": 30 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_nvme_set_options", 00:40:31.253 "params": { 00:40:31.253 "action_on_timeout": "none", 00:40:31.253 "timeout_us": 0, 00:40:31.253 "timeout_admin_us": 0, 00:40:31.253 "keep_alive_timeout_ms": 10000, 00:40:31.253 "arbitration_burst": 0, 00:40:31.253 "low_priority_weight": 0, 00:40:31.253 "medium_priority_weight": 0, 00:40:31.253 "high_priority_weight": 0, 00:40:31.253 "nvme_adminq_poll_period_us": 10000, 00:40:31.253 "nvme_ioq_poll_period_us": 0, 00:40:31.253 "io_queue_requests": 512, 00:40:31.253 "delay_cmd_submit": true, 00:40:31.253 "transport_retry_count": 4, 00:40:31.253 "bdev_retry_count": 3, 00:40:31.253 "transport_ack_timeout": 0, 00:40:31.253 "ctrlr_loss_timeout_sec": 0, 00:40:31.253 "reconnect_delay_sec": 0, 00:40:31.253 "fast_io_fail_timeout_sec": 0, 00:40:31.253 "disable_auto_failback": false, 00:40:31.253 "generate_uuids": false, 00:40:31.253 "transport_tos": 0, 00:40:31.253 "nvme_error_stat": false, 00:40:31.253 "rdma_srq_size": 0, 00:40:31.253 "io_path_stat": false, 00:40:31.253 "allow_accel_sequence": false, 00:40:31.253 "rdma_max_cq_size": 0, 00:40:31.253 "rdma_cm_event_timeout_ms": 0, 00:40:31.253 "dhchap_digests": [ 00:40:31.253 "sha256", 00:40:31.253 "sha384", 00:40:31.253 "sha512" 00:40:31.253 ], 00:40:31.253 "dhchap_dhgroups": [ 00:40:31.253 "null", 00:40:31.253 "ffdhe2048", 00:40:31.253 "ffdhe3072", 00:40:31.253 "ffdhe4096", 00:40:31.253 "ffdhe6144", 00:40:31.253 "ffdhe8192" 00:40:31.253 ] 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_nvme_attach_controller", 00:40:31.253 "params": { 00:40:31.253 "name": "nvme0", 00:40:31.253 "trtype": "TCP", 00:40:31.253 "adrfam": "IPv4", 00:40:31.253 "traddr": "127.0.0.1", 00:40:31.253 "trsvcid": "4420", 00:40:31.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.253 "prchk_reftag": false, 00:40:31.253 "prchk_guard": false, 00:40:31.253 "ctrlr_loss_timeout_sec": 0, 00:40:31.253 "reconnect_delay_sec": 0, 00:40:31.253 "fast_io_fail_timeout_sec": 0, 00:40:31.253 "psk": "key0", 00:40:31.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.253 "hdgst": false, 00:40:31.253 "ddgst": false 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_nvme_set_hotplug", 00:40:31.253 "params": { 00:40:31.253 "period_us": 100000, 00:40:31.253 "enable": false 00:40:31.253 } 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "method": "bdev_wait_for_examine" 00:40:31.253 } 00:40:31.253 ] 00:40:31.253 }, 00:40:31.253 { 00:40:31.253 "subsystem": "nbd", 00:40:31.253 "config": [] 00:40:31.253 } 00:40:31.253 ] 00:40:31.253 }' 00:40:31.253 15:13:10 keyring_file -- keyring/file.sh@114 -- # killprocess 2093899 00:40:31.253 15:13:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2093899 ']' 00:40:31.253 15:13:10 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2093899 00:40:31.253 15:13:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:31.253 15:13:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:31.253 15:13:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2093899 00:40:31.511 15:13:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:31.511 15:13:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:31.511 15:13:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2093899' 00:40:31.511 killing process with pid 2093899 00:40:31.511 15:13:10 keyring_file -- common/autotest_common.sh@967 -- # kill 2093899 00:40:31.511 Received shutdown signal, test time was about 1.000000 seconds 00:40:31.511 00:40:31.511 Latency(us) 00:40:31.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.511 =================================================================================================================== 00:40:31.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.511 15:13:10 keyring_file -- common/autotest_common.sh@972 -- # wait 2093899 00:40:32.442 15:13:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=2095476 00:40:32.442 15:13:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2095476 /var/tmp/bperf.sock 00:40:32.442 15:13:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2095476 ']' 00:40:32.442 15:13:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:32.442 15:13:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:32.442 15:13:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:32.442 15:13:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:40:32.442 "subsystems": [ 00:40:32.442 { 00:40:32.442 "subsystem": "keyring", 00:40:32.442 "config": [ 00:40:32.442 { 00:40:32.442 "method": "keyring_file_add_key", 00:40:32.442 "params": { 00:40:32.442 "name": "key0", 00:40:32.442 "path": "/tmp/tmp.0utngAheRM" 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "keyring_file_add_key", 00:40:32.442 "params": { 00:40:32.442 "name": "key1", 00:40:32.442 "path": "/tmp/tmp.vCCypXOz4B" 00:40:32.442 } 00:40:32.442 } 00:40:32.442 ] 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "subsystem": "iobuf", 00:40:32.442 "config": [ 00:40:32.442 { 00:40:32.442 "method": "iobuf_set_options", 00:40:32.442 "params": { 00:40:32.442 "small_pool_count": 8192, 00:40:32.442 "large_pool_count": 1024, 00:40:32.442 "small_bufsize": 8192, 00:40:32.442 "large_bufsize": 135168 00:40:32.442 } 00:40:32.442 } 00:40:32.442 ] 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "subsystem": "sock", 00:40:32.442 "config": [ 00:40:32.442 { 00:40:32.442 "method": "sock_set_default_impl", 00:40:32.442 "params": { 00:40:32.442 "impl_name": "posix" 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "sock_impl_set_options", 00:40:32.442 "params": { 00:40:32.442 "impl_name": "ssl", 00:40:32.442 "recv_buf_size": 4096, 00:40:32.442 "send_buf_size": 4096, 00:40:32.442 "enable_recv_pipe": true, 00:40:32.442 "enable_quickack": false, 00:40:32.442 "enable_placement_id": 0, 00:40:32.442 "enable_zerocopy_send_server": true, 00:40:32.442 "enable_zerocopy_send_client": false, 00:40:32.442 "zerocopy_threshold": 0, 00:40:32.442 
"tls_version": 0, 00:40:32.442 "enable_ktls": false 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "sock_impl_set_options", 00:40:32.442 "params": { 00:40:32.442 "impl_name": "posix", 00:40:32.442 "recv_buf_size": 2097152, 00:40:32.442 "send_buf_size": 2097152, 00:40:32.442 "enable_recv_pipe": true, 00:40:32.442 "enable_quickack": false, 00:40:32.442 "enable_placement_id": 0, 00:40:32.442 "enable_zerocopy_send_server": true, 00:40:32.442 "enable_zerocopy_send_client": false, 00:40:32.442 "zerocopy_threshold": 0, 00:40:32.442 "tls_version": 0, 00:40:32.442 "enable_ktls": false 00:40:32.442 } 00:40:32.442 } 00:40:32.442 ] 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "subsystem": "vmd", 00:40:32.442 "config": [] 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "subsystem": "accel", 00:40:32.442 "config": [ 00:40:32.442 { 00:40:32.442 "method": "accel_set_options", 00:40:32.442 "params": { 00:40:32.442 "small_cache_size": 128, 00:40:32.442 "large_cache_size": 16, 00:40:32.442 "task_count": 2048, 00:40:32.442 "sequence_count": 2048, 00:40:32.442 "buf_count": 2048 00:40:32.442 } 00:40:32.442 } 00:40:32.442 ] 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "subsystem": "bdev", 00:40:32.442 "config": [ 00:40:32.442 { 00:40:32.442 "method": "bdev_set_options", 00:40:32.442 "params": { 00:40:32.442 "bdev_io_pool_size": 65535, 00:40:32.442 "bdev_io_cache_size": 256, 00:40:32.442 "bdev_auto_examine": true, 00:40:32.442 "iobuf_small_cache_size": 128, 00:40:32.442 "iobuf_large_cache_size": 16 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "bdev_raid_set_options", 00:40:32.442 "params": { 00:40:32.442 "process_window_size_kb": 1024 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "bdev_iscsi_set_options", 00:40:32.442 "params": { 00:40:32.442 "timeout_sec": 30 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "bdev_nvme_set_options", 00:40:32.442 "params": { 00:40:32.442 "action_on_timeout": "none", 00:40:32.442 "timeout_us": 0, 00:40:32.442 "timeout_admin_us": 0, 00:40:32.442 "keep_alive_timeout_ms": 10000, 00:40:32.442 "arbitration_burst": 0, 00:40:32.442 "low_priority_weight": 0, 00:40:32.442 "medium_priority_weight": 0, 00:40:32.442 "high_priority_weight": 0, 00:40:32.442 "nvme_adminq_poll_period_us": 10000, 00:40:32.442 "nvme_ioq_poll_period_us": 0, 00:40:32.442 "io_queue_requests": 512, 00:40:32.442 "delay_cmd_submit": true, 00:40:32.442 "transport_retry_count": 4, 00:40:32.442 "bdev_retry_count": 3, 00:40:32.442 "transport_ack_timeout": 0, 00:40:32.442 "ctrlr_loss_timeout_sec": 0, 00:40:32.442 "reconnect_delay_sec": 0, 00:40:32.442 "fast_io_fail_timeout_sec": 0, 00:40:32.442 "disable_auto_failback": false, 00:40:32.442 "generate_uuids": false, 00:40:32.442 "transport_tos": 0, 00:40:32.442 "nvme_error_stat": false, 00:40:32.442 "rdma_srq_size": 0, 00:40:32.442 "io_path_stat": false, 00:40:32.442 "allow_accel_sequence": false, 00:40:32.442 15:13:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:40:32.442 "rdma_max_cq_size": 0, 00:40:32.442 "rdma_cm_event_timeout_ms": 0, 00:40:32.442 "dhchap_digests": [ 00:40:32.442 "sha256", 00:40:32.442 "sha384", 00:40:32.442 "sha512" 00:40:32.442 ], 00:40:32.442 "dhchap_dhgroups": [ 00:40:32.442 "null", 00:40:32.442 "ffdhe2048", 00:40:32.442 "ffdhe3072", 00:40:32.442 "ffdhe4096", 00:40:32.442 "ffdhe6144", 00:40:32.442 "ffdhe8192" 00:40:32.442 ] 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "bdev_nvme_attach_controller", 00:40:32.442 "params": { 00:40:32.442 "name": "nvme0", 00:40:32.442 "trtype": "TCP", 00:40:32.442 "adrfam": "IPv4", 00:40:32.442 "traddr": "127.0.0.1", 00:40:32.442 "trsvcid": "4420", 00:40:32.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:32.442 "prchk_reftag": false, 00:40:32.442 "prchk_guard": false, 00:40:32.442 "ctrlr_loss_timeout_sec": 0, 00:40:32.442 "reconnect_delay_sec": 0, 00:40:32.442 "fast_io_fail_timeout_sec": 0, 00:40:32.442 "psk": "key0", 00:40:32.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:32.442 "hdgst": false, 00:40:32.442 "ddgst": false 00:40:32.442 } 00:40:32.442 }, 00:40:32.442 { 00:40:32.442 "method": "bdev_nvme_set_hotplug", 00:40:32.443 "params": { 00:40:32.443 "period_us": 100000, 00:40:32.443 "enable": false 00:40:32.443 } 00:40:32.443 }, 00:40:32.443 { 00:40:32.443 "method": "bdev_wait_for_examine" 00:40:32.443 } 00:40:32.443 ] 00:40:32.443 }, 00:40:32.443 { 00:40:32.443 "subsystem": "nbd", 00:40:32.443 "config": [] 00:40:32.443 } 00:40:32.443 ] 00:40:32.443 }' 00:40:32.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:32.443 15:13:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:32.443 15:13:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:32.443 [2024-07-14 15:13:11.631913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:32.443 [2024-07-14 15:13:11.632064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095476 ] 00:40:32.443 EAL: No free 2048 kB hugepages reported on node 1 00:40:32.701 [2024-07-14 15:13:11.753533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.701 [2024-07-14 15:13:11.980739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.266 [2024-07-14 15:13:12.400661] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:33.266 15:13:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:33.266 15:13:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:33.266 15:13:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:40:33.266 15:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.266 15:13:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:40:33.523 15:13:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:33.524 15:13:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:33.524 15:13:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.524 15:13:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.524 15:13:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.524 15:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.524 15:13:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.782 15:13:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:33.782 15:13:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:33.782 15:13:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:33.782 15:13:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.782 15:13:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.782 15:13:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.782 15:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.048 15:13:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:34.048 15:13:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:34.048 15:13:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:34.048 15:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:34.309 15:13:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:34.309 15:13:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:34.309 15:13:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.0utngAheRM /tmp/tmp.vCCypXOz4B 00:40:34.309 15:13:13 keyring_file -- keyring/file.sh@20 -- # killprocess 2095476 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2095476 ']' 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2095476 00:40:34.309 15:13:13 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2095476 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2095476' 00:40:34.309 killing process with pid 2095476 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@967 -- # kill 2095476 00:40:34.309 Received shutdown signal, test time was about 1.000000 seconds 00:40:34.309 00:40:34.309 Latency(us) 00:40:34.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.309 =================================================================================================================== 00:40:34.309 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:34.309 15:13:13 keyring_file -- common/autotest_common.sh@972 -- # wait 2095476 00:40:35.708 15:13:14 keyring_file -- keyring/file.sh@21 -- # killprocess 2093755 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2093755 ']' 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2093755 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2093755 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2093755' 00:40:35.708 killing process with pid 2093755 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@967 -- # kill 2093755 00:40:35.708 [2024-07-14 15:13:14.650482] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:35.708 15:13:14 keyring_file -- common/autotest_common.sh@972 -- # wait 2093755 00:40:38.235 00:40:38.235 real 0m19.364s 00:40:38.235 user 0m42.951s 00:40:38.235 sys 0m3.601s 00:40:38.235 15:13:17 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:38.235 15:13:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:38.235 ************************************ 00:40:38.235 END TEST keyring_file 00:40:38.235 ************************************ 00:40:38.235 15:13:17 -- common/autotest_common.sh@1142 -- # return 0 00:40:38.235 15:13:17 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:38.235 15:13:17 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:38.235 15:13:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:38.235 15:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:38.235 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:40:38.235 ************************************ 00:40:38.235 START TEST keyring_linux 00:40:38.235 ************************************ 00:40:38.235 15:13:17 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:38.235 * Looking for test storage... 00:40:38.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.235 15:13:17 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.235 15:13:17 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.235 15:13:17 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.235 15:13:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.235 15:13:17 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.235 15:13:17 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.235 15:13:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:38.235 15:13:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:38.235 15:13:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:38.235 15:13:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:38.235 15:13:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:38.236 15:13:17 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:38.236 /tmp/:spdk-test:key0 00:40:38.236 15:13:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:38.236 15:13:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:38.236 15:13:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:38.236 /tmp/:spdk-test:key1 00:40:38.236 15:13:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2096249 00:40:38.236 15:13:17 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:38.236 15:13:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2096249 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2096249 ']' 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:38.236 15:13:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:38.236 [2024-07-14 15:13:17.440438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:38.236 [2024-07-14 15:13:17.440590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096249 ] 00:40:38.236 EAL: No free 2048 kB hugepages reported on node 1 00:40:38.495 [2024-07-14 15:13:17.576961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.753 [2024-07-14 15:13:17.831609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.687 [2024-07-14 15:13:18.718841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.687 null0 00:40:39.687 [2024-07-14 15:13:18.750862] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:39.687 [2024-07-14 15:13:18.751426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:39.687 847627220 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:39.687 977505822 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2096389 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:39.687 15:13:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2096389 /var/tmp/bperf.sock 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2096389 ']' 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:39.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:39.687 15:13:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.687 [2024-07-14 15:13:18.857691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:39.687 [2024-07-14 15:13:18.857850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096389 ] 00:40:39.687 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.945 [2024-07-14 15:13:18.998562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.945 [2024-07-14 15:13:19.250613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.510 15:13:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:40.510 15:13:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:40.510 15:13:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:40.510 15:13:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:40.769 15:13:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:40.769 15:13:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:41.333 15:13:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:41.333 15:13:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:41.590 [2024-07-14 15:13:20.824441] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:41.851 nvme0n1 00:40:41.851 15:13:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:41.851 15:13:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:41.851 15:13:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:41.851 15:13:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:41.851 15:13:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.851 15:13:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:42.110 15:13:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:42.110 15:13:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:42.110 15:13:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:42.110 15:13:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:42.110 15:13:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:42.110 15:13:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:42.110 15:13:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@25 -- # sn=847627220 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 847627220 == \8\4\7\6\2\7\2\2\0 ]] 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 847627220 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:42.367 15:13:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:42.367 Running I/O for 1 seconds... 00:40:43.299 00:40:43.299 Latency(us) 00:40:43.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:43.299 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:43.299 nvme0n1 : 1.02 6467.66 25.26 0.00 0.00 19621.82 8641.04 29321.29 00:40:43.299 =================================================================================================================== 00:40:43.299 Total : 6467.66 25.26 0.00 0.00 19621.82 8641.04 29321.29 00:40:43.299 0 00:40:43.299 15:13:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:43.299 15:13:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:43.557 15:13:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:43.557 15:13:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:43.557 15:13:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:43.557 15:13:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:43.557 15:13:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.557 15:13:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:43.815 15:13:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:43.815 15:13:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:43.815 15:13:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:43.815 15:13:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:43.815 15:13:23 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.815 15:13:23 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:44.073 [2024-07-14 15:13:23.322816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:44.073 [2024-07-14 15:13:23.323336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:44.073 [2024-07-14 15:13:23.324301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:44.073 [2024-07-14 15:13:23.325296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:44.073 [2024-07-14 15:13:23.325336] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:44.073 [2024-07-14 15:13:23.325359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:44.073 request: 00:40:44.073 { 00:40:44.073 "name": "nvme0", 00:40:44.073 "trtype": "tcp", 00:40:44.073 "traddr": "127.0.0.1", 00:40:44.073 "adrfam": "ipv4", 00:40:44.073 "trsvcid": "4420", 00:40:44.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:44.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:44.073 "prchk_reftag": false, 00:40:44.073 "prchk_guard": false, 00:40:44.073 "hdgst": false, 00:40:44.073 "ddgst": false, 00:40:44.073 "psk": ":spdk-test:key1", 00:40:44.073 "method": "bdev_nvme_attach_controller", 00:40:44.073 "req_id": 1 00:40:44.073 } 00:40:44.073 Got JSON-RPC error response 00:40:44.073 response: 00:40:44.073 { 00:40:44.073 "code": -5, 00:40:44.073 "message": "Input/output error" 00:40:44.073 } 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@33 -- # sn=847627220 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 847627220 00:40:44.073 1 links removed 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@33 -- # sn=977505822 
00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 977505822 00:40:44.073 1 links removed 00:40:44.073 15:13:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2096389 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2096389 ']' 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2096389 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2096389 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2096389' 00:40:44.073 killing process with pid 2096389 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@967 -- # kill 2096389 00:40:44.073 Received shutdown signal, test time was about 1.000000 seconds 00:40:44.073 00:40:44.073 Latency(us) 00:40:44.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.073 =================================================================================================================== 00:40:44.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:44.073 15:13:23 keyring_linux -- common/autotest_common.sh@972 -- # wait 2096389 00:40:45.446 15:13:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2096249 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2096249 ']' 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2096249 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2096249 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2096249' 00:40:45.446 killing process with pid 2096249 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 2096249 00:40:45.446 15:13:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 2096249 00:40:47.968 00:40:47.968 real 0m9.495s 00:40:47.968 user 0m16.164s 00:40:47.968 sys 0m1.855s 00:40:47.968 15:13:26 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:47.968 15:13:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:47.968 ************************************ 00:40:47.968 END TEST keyring_linux 00:40:47.968 ************************************ 00:40:47.968 15:13:26 -- common/autotest_common.sh@1142 -- # return 0 00:40:47.968 15:13:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 
-- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:47.968 15:13:26 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:47.968 15:13:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:47.968 15:13:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:47.968 15:13:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:47.968 15:13:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:47.968 15:13:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:47.968 15:13:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:47.968 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:40:47.968 15:13:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:47.968 15:13:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:47.968 15:13:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:47.968 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:40:49.337 INFO: APP EXITING 00:40:49.337 INFO: killing all VMs 00:40:49.337 INFO: killing vhost app 00:40:49.337 INFO: EXIT DONE 00:40:50.714 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:50.714 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.714 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.714 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.714 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.714 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.714 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.714 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.714 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:50.714 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.714 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.714 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.714 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.714 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.714 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.714 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.714 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:51.648 Cleaning 00:40:51.649 Removing: /var/run/dpdk/spdk0/config 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:51.649 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:51.649 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:51.649 Removing: /var/run/dpdk/spdk1/config 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:51.649 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:51.649 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:51.649 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:51.649 Removing: /var/run/dpdk/spdk1/mp_socket 00:40:51.649 Removing: /var/run/dpdk/spdk2/config 00:40:51.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:51.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:51.908 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:51.908 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:51.908 Removing: /var/run/dpdk/spdk3/config 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:51.908 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:51.908 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:51.908 Removing: /var/run/dpdk/spdk4/config 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:51.908 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:51.908 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:51.908 Removing: /dev/shm/bdev_svc_trace.1 00:40:51.908 Removing: /dev/shm/nvmf_trace.0 00:40:51.908 Removing: /dev/shm/spdk_tgt_trace.pid1747065 00:40:51.908 Removing: /var/run/dpdk/spdk0 00:40:51.908 Removing: /var/run/dpdk/spdk1 00:40:51.908 Removing: /var/run/dpdk/spdk2 00:40:51.908 Removing: /var/run/dpdk/spdk3 00:40:51.908 Removing: /var/run/dpdk/spdk4 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1744196 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1745322 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1747065 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1747782 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1748733 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1749147 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1750133 00:40:51.908 Removing: /var/run/dpdk/spdk_pid1750272 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1750907 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1752361 00:40:51.909 Removing: 
/var/run/dpdk/spdk_pid1753545 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1754128 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1754717 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1755194 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1755783 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1756067 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1756232 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1756544 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1756994 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1760266 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1760816 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1761372 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1761558 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1762866 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1763128 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1764408 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1764628 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1765069 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1765207 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1765641 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1765781 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1766813 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1767099 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1767414 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1767866 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1768136 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1768461 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1768753 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1769053 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1769450 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1769748 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1770157 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1770448 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1770741 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1771149 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1771440 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1771845 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1772136 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1772550 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1772843 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1773128 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1773539 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1773835 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1774241 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1774538 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1774879 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1775243 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1775565 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1776169 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1778621 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1835016 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1837778 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1845465 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1848902 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1851511 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1851920 00:40:51.909 Removing: /var/run/dpdk/spdk_pid1856030 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1861855 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1862133 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1865033 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1868988 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1871295 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1878690 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1884541 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1885955 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1886778 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1897755 00:40:52.168 Removing: 
/var/run/dpdk/spdk_pid1900238 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1926060 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1929104 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1930282 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1931736 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1932009 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1932282 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1932561 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1933386 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1935451 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1936717 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1937411 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1939293 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1940114 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1940948 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1943726 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1947376 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1950920 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1975462 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1978485 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1982516 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1984099 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1985717 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1988725 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1991521 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1996634 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1996643 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1999776 00:40:52.168 Removing: /var/run/dpdk/spdk_pid1999931 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2000069 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2000457 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2000465 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2001562 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2002842 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2004038 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2005217 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2006396 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2007698 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2011622 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2012067 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2013350 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2014199 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2018176 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2020396 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2024709 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2028400 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2034891 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2039610 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2039618 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2052077 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2052743 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2053414 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2054204 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2055687 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2056351 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2056895 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2057563 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2060333 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2060729 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2064773 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2064957 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2066720 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2072105 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2072119 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2075141 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2076661 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2078183 00:40:52.168 Removing: 
/var/run/dpdk/spdk_pid2079162 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2080693 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2081687 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2087963 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2088355 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2088750 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2090632 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2091030 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2091319 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2093755 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2093899 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2095476 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2096249 00:40:52.168 Removing: /var/run/dpdk/spdk_pid2096389 00:40:52.168 Clean 00:40:52.427 15:13:31 -- common/autotest_common.sh@1451 -- # return 0 00:40:52.427 15:13:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:40:52.427 15:13:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:52.427 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:40:52.427 15:13:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:40:52.427 15:13:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:52.427 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:40:52.427 15:13:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:52.427 15:13:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:52.427 15:13:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:52.427 15:13:31 -- spdk/autotest.sh@391 -- # hash lcov 00:40:52.427 15:13:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:40:52.427 15:13:31 -- spdk/autotest.sh@393 -- # hostname 00:40:52.427 15:13:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:52.685 geninfo: WARNING: invalid characters removed from testname! 
00:41:19.258 15:13:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:22.540 15:14:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:25.818 15:14:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:28.347 15:14:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:30.919 15:14:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:34.197 15:14:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:36.726 15:14:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:36.726 15:14:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.726 15:14:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:41:36.726 15:14:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.726 15:14:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.726 15:14:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.726 15:14:15 -- paths/export.sh@3 -- $ 
00:41:36.726 15:14:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:41:36.726 15:14:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:41:36.726 15:14:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:36.726 15:14:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:36.726 15:14:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:36.726 15:14:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:36.726 15:14:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:36.726 15:14:15 -- paths/export.sh@5 -- $ export PATH
00:41:36.726 15:14:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:36.726 15:14:15 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:41:36.726 15:14:15 -- common/autobuild_common.sh@444 -- $ date +%s
00:41:36.726 15:14:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720962855.XXXXXX
00:41:36.726 15:14:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720962855.IzU8Da
00:41:36.726 15:14:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:41:36.726 15:14:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:41:36.726 15:14:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:41:36.726 15:14:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:41:36.726 15:14:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:41:36.726 15:14:15 -- common/autobuild_common.sh@460 -- $ get_config_params
00:41:36.726 15:14:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:41:36.726 15:14:15 -- common/autotest_common.sh@10 -- $ set +x
00:41:36.726 15:14:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
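
The autobuild prologue above prepends the toolchain paths, allocates a throwaway packaging workspace with mktemp, and assembles a scan-build wrapper that keeps bundled third-party trees out of static analysis. A rough sketch of that setup with hypothetical directory names; the --exclude and --status-bugs switches are the ones visible in the log:

  # private scratch workspace for the packaging step
  SPDK_WORKSPACE=$(mktemp -dt spdk.XXXXXX)
  # analyzer results go to their own directory; bundled dpdk/ and /tmp are excluded,
  # and --status-bugs makes scan-build exit non-zero if it reports any findings
  scanbuild="scan-build -o ./scan-build-tmp --exclude ./dpdk/ --exclude /tmp --status-bugs"
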
00:41:36.726 15:14:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:41:36.726 15:14:15 -- pm/common@17 -- $ local monitor
00:41:36.726 15:14:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:36.726 15:14:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:36.726 15:14:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:36.726 15:14:15 -- pm/common@21 -- $ date +%s
00:41:36.726 15:14:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:36.726 15:14:15 -- pm/common@21 -- $ date +%s
00:41:36.726 15:14:15 -- pm/common@25 -- $ sleep 1
00:41:36.726 15:14:15 -- pm/common@21 -- $ date +%s
00:41:36.726 15:14:15 -- pm/common@21 -- $ date +%s
00:41:36.726 15:14:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720962855
00:41:36.726 15:14:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720962855
00:41:36.726 15:14:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720962855
00:41:36.726 15:14:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720962855
00:41:36.726 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720962855_collect-vmstat.pm.log
00:41:36.726 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720962855_collect-cpu-load.pm.log
00:41:36.726 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720962855_collect-cpu-temp.pm.log
00:41:36.726 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720962855_collect-bmc-pm.bmc.pm.log
00:41:37.663 15:14:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:41:37.663 15:14:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:41:37.663 15:14:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:37.663 15:14:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:41:37.663 15:14:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:41:37.663 15:14:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:41:37.663 15:14:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:41:37.663 15:14:16 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:37.663 15:14:16 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:41:37.663 15:14:16 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:37.663 15:14:16 -- spdk/autopackage.sh@20 -- $ exit 0
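
start_monitor_resources above launches one collector per resource (CPU load, vmstat, CPU temperature, BMC power), each sampling into the power/ output directory and leaving a pid file there; the EXIT trap that fires next reads those pid files back and signals each collector. A simplified sketch of that pid-file lifecycle with hypothetical names (in the sketch the caller records the pid, whereas the real helper scripts manage their own pid files):

  POWER_DIR=./output/power
  # start: sample into the shared directory (-d), log to a file (-l), name runs with a prefix (-p)
  ./collect-cpu-load -d "$POWER_DIR" -l -p monitor.autopackage &
  echo $! > "$POWER_DIR/collect-cpu-load.pid"
  # stop: what signal_monitor_resources does for each monitor
  if [[ -e "$POWER_DIR/collect-cpu-load.pid" ]]; then
      kill -TERM "$(cat "$POWER_DIR/collect-cpu-load.pid")"
  fi
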
00:41:37.663 15:14:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:37.663 15:14:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:37.663 15:14:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:37.663 15:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:37.663 15:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:41:37.663 15:14:16 -- pm/common@44 -- $ pid=2108916
00:41:37.663 15:14:16 -- pm/common@50 -- $ kill -TERM 2108916
00:41:37.663 15:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:37.663 15:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:41:37.663 15:14:16 -- pm/common@44 -- $ pid=2108918
00:41:37.663 15:14:16 -- pm/common@50 -- $ kill -TERM 2108918
00:41:37.663 15:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:37.663 15:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:41:37.663 15:14:16 -- pm/common@44 -- $ pid=2108920
00:41:37.663 15:14:16 -- pm/common@50 -- $ kill -TERM 2108920
00:41:37.663 15:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:37.663 15:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:41:37.663 15:14:16 -- pm/common@44 -- $ pid=2108950
00:41:37.663 15:14:16 -- pm/common@50 -- $ sudo -E kill -TERM 2108950
00:41:37.663 + [[ -n 1657956 ]]
00:41:37.663 + sudo kill 1657956
00:41:37.933 [Pipeline] }
00:41:37.953 [Pipeline] // stage
00:41:37.958 [Pipeline] }
00:41:37.974 [Pipeline] // timeout
00:41:37.979 [Pipeline] }
00:41:37.996 [Pipeline] // catchError
00:41:38.001 [Pipeline] }
00:41:38.019 [Pipeline] // wrap
00:41:38.025 [Pipeline] }
00:41:38.040 [Pipeline] // catchError
00:41:38.049 [Pipeline] stage
00:41:38.051 [Pipeline] { (Epilogue)
00:41:38.066 [Pipeline] catchError
00:41:38.067 [Pipeline] {
00:41:38.082 [Pipeline] echo
00:41:38.083 Cleanup processes
00:41:38.089 [Pipeline] sh
00:41:38.413 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:38.413 2109058 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:41:38.413 2109183 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:38.431 [Pipeline] sh
00:41:38.712 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:38.712 ++ grep -v 'sudo pgrep'
00:41:38.712 ++ awk '{print $1}'
00:41:38.712 + sudo kill -9 2109058
00:41:38.724 [Pipeline] sh
00:41:39.009 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:48.986 [Pipeline] sh
00:41:49.273 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:49.273 Artifacts sizes are good
00:41:49.288 [Pipeline] archiveArtifacts
00:41:49.295 Archiving artifacts
00:41:49.548 [Pipeline] sh
00:41:49.833 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:49.849 [Pipeline] cleanWs
00:41:49.860 [WS-CLEANUP] Deleting project workspace...
00:41:49.860 [WS-CLEANUP] Deferred wipeout is used...
00:41:49.867 [WS-CLEANUP] done
00:41:49.869 [Pipeline] }
00:41:49.889 [Pipeline] // catchError
00:41:49.902 [Pipeline] sh
00:41:50.184 + logger -p user.info -t JENKINS-CI
00:41:50.193 [Pipeline] }
00:41:50.210 [Pipeline] // stage
00:41:50.215 [Pipeline] }
00:41:50.232 [Pipeline] // node
00:41:50.238 [Pipeline] End of Pipeline
00:41:50.269 Finished: SUCCESS
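
The epilogue's "Cleanup processes" step above sweeps up anything still running under the workspace (here a leftover ipmitool dump started by the BMC collector) and force-kills it before artifacts are compressed and archived. A sketch of that pgrep-based sweep under an assumed workspace path, mirroring the pgrep/grep/awk chain in the log:

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # list matching processes, drop the pgrep invocation itself, keep only the pids
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # force-kill whatever is left, only if something matched
  [ -n "$pids" ] && sudo kill -9 $pids
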